Loyola Magazine
A 500-Year-Old Approach to Artificial Intelligence
[Illustration: a computer chip with a block Loyola 'L' at its center]

A Jesuit Approach to Artificial Intelligence (AI)

Loyola Faculty and Alumni Share Their Insights into How Technology is Changing the World
[Photos: the Loyola Charles Street Bridge as it appears today (left) and as modified by AI (right)]
Left: In real life, the USF&G Pedestrian Bridge extends across North Charles Street, crossed by hundreds of Loyola students and faculty each day on their way to and from classes, dining, clubs and activities, and student residence halls. Right: The original photo has been altered through the use of artificial intelligence. In the hands of a creative human designer, Charles Street has been reimagined with a pond and fountain. In the past, altering and enhancing a photo would take hours. Now, a designer can create these images in minutes.

Since the launch of ChatGPT in November 2022, the artificial intelligence chatbot has been making headlines everywhere.

Academics are concerned—and thrilled—about the rise of AI and its potential for the future of higher education.

The Jesuit approach to education, including intellectual excellence, critical understanding, and a constant challenge to improve, positions Loyola faculty, students, and alumni to use technology responsibly and to create positive change in our world. But naturally, with new technology comes many questions: How advanced is AI—and where is it heading next? How can AI help students in the classroom? How can universities ensure students continue to approach learning with academic integrity?

Faculty, administrators, and students at Loyola are deep in conversation about the opportunities and challenges that AI presents. And Loyola alumni bring a depth and breadth of understanding to a topic that is changing business, education, and society.

AI in the World

For many Loyola alumni working in technology, the rise of artificial intelligence has been met with cautious optimism, a Jesuit openness to discovering new possibilities.

An early adopter of the tech was Peter Guerra, head of the Data and AI Service Line at Microsoft, who received his MBA from Loyola with an Information Systems concentration in 2014. In his current role, Guerra helps companies and public sector customers build AI solutions to their hardest challenges. He’s been using AI for almost 20 years.

Ethical conduct and a moral compass are necessary in the field of data science and AI.

—Jessica Wade, M.S. Data Science '23


“With the recent advances [in AI], we are working with customers to drive many new solutions that weren’t possible just five years ago,” he says. “It’s an exciting time to be in the field.”

But, Guerra notes, AI is currently in its “hype cycle,” and he advises responsible AI review boards, carefully considered processes, and rigorous testing.

“These aren’t solutions that work right out of the box, so to speak; they require a lot of planning and testing before being launched,” he notes. “AI will never replace human creativity and ingenuity. AI is a tool we will leverage to tell us the ‘what,’ but I believe it will only ever be an augmentation for the ‘so what’ and ‘now what.’”

As founder of LivePerson, Robert LoCascio, ’90, has also been focused on AI for years. In 1997, he was a pioneer in creating webchats to improve the customer experience—which ultimately grew into his company that uses technology to help brands hold conversations with their customers.

“In the world today, AI is the next big leg of giving humanity back time,” says LoCascio, who graduated from Loyola with a Bachelor of Business Administration with a minor in English literature.

“AI is one of the highest forms of augmenting a human in thought—a machine can do these tasks that can be fairly complex, and a machine can even connect with a human.” Although AI hasn’t figured out how to replicate consciousness, LoCascio says, it provides reasoning as an outcome based on information provided to it.

The father of three envisions a future for his young children where AI gives them back time in their lives. “My kids will definitely have something they own and control in their life that helps them solve problems, and it will be very personalized,” he says.

Recognizing some of the challenges that need to be overcome, he co-created EqualAI, which focuses on how to create safe and secure AI that operates without unconscious bias. Looking to the future, LoCascio sees tremendous opportunity for AI in health care, saving lives and meeting human needs with advanced technology.

Another Loyola graduate who’s optimistic about AI’s possibilities is Jessica Wade, who received her M.S. degree in Data Science in 2023. She’s currently working in information systems for Google and seeking a full-time role in the data science field.

“I truly desire to be the change we wish to see in the world, and by taking retroactive data to predict future trends, lives could possibly be saved,” she explains. And while she’s enthusiastic about the possibilities of AI, she agrees that it will never replace human emotional intelligence. “Ethical conduct and a moral compass are necessary in the field of data science and AI.”

[Illustration: a lightbulb and a cloud connected to a compass by computer circuits]

AI in the Classroom

At Loyola, ethics are a top priority. Rather than worrying about students using AI to write their next paper, however, Dobin Yim, Ph.D., assistant professor of information systems, is embracing the new technology.

“I am incorporating ChatGPT as much as possible this fall semester and making it a requirement for students to use,” Yim says. “I don’t want to just teach students how to memorize things. I want to teach them how to use the knowledge and technology.”

We will depend on (AI) in the future, and we won’t be able to do our required daily jobs without it.

—Paul Tallon, Ph.D., professor of information systems and chair of the Information Systems, Law, and Operations department

Yim believes ChatGPT could change how professors in higher education dispense and assess knowledge. Rather than predicting where the technology might go in years to come, however, he suggests the answers could lie in the past. In the 1960s, for example, inquiry-based learning became popular as discovery learning models emerged. The approach focuses on educating students by posing questions or scenarios.

“By learning how to transition from basic inquiries to constructing coherent series of structured questions, students will naturally develop a deeper interest and expertise in the subject matter, eventually leading them to create their own comprehensive sets of questions and answers,” Yim says.

Yim is not alone in his outlook on AI in higher education.

An Everyday Classroom Tool

The Society of Jesus has long been open to embracing new tools that allow for the transformational education it has been offering for centuries. There’s a reason 34 craters on the moon are named for Jesuits.

Paul Tallon, Ph.D., professor of information systems and chair of the Information Systems, Law, and Operations department in Sellinger, agrees that leaning into ChatGPT and generative AI will benefit everyone as it becomes an everyday tool.

“If we’re not teaching AI, then we’re doing Loyola students a disservice because it touches every discipline,” Tallon says. “We will depend on it in the future, and we won’t be able to do our required daily jobs without it.”

To help preserve academic integrity in the classroom, Tallon uses AI to create assignments that AI itself would not be able to answer. He views it as a teaching tool that exists to help students, not hinder them.

“Similar to calculators in today’s society, just because you have one in math class doesn’t mean you have all the answers. Students still need to use the correct prompts to get the right answers,” Tallon explains.

[Photos: the Greyhound mascot Iggy in front of a Loyola flag (left) and on an AI-generated beach wearing an AI-generated hat (right)]
Left: Original photograph of Iggy at an accepted student open house. Right: This issue’s cover photo and the photo of Iggy at the beach were created using Photoshop Beta, which allows users to test AI Generative Fill tools. The designer was able to select portions of the photo and enter text prompts (e.g. “blue sky” and “fountain”) to change the photo. As you see, Iggy doesn't even need to leave the Evergreen campus to find himself enjoying a sunny day on the beach.

Navigating AI Challenges

While AI and ChatGPT have received praise from many in higher education, some say their easy accessibility to students needs to be addressed. Janine Holc, Ph.D., professor of political science, feels AI is too easy for students to use.

“AI is an open platform—easily accessible on any device—and generates an answer almost immediately,” Holc says. “AI is not a tool for the unprepared student—but even prepared students are defaulting to it.” She plans to ask students to put their devices away and do more in-class writing, individually and in pairs or groups, using pens and pencils.

“The issue I am really concerned about is the confidence students have in their own processes and what they come up with on their own,” Holc says. “AI is displacing a student’s own voice and process of developing their own voice. This is happening even when students are engaged and eager to learn.”

That commitment to helping students find their own voice and being able to think critically and communicate clearly is an essential component of a Jesuit, liberal arts education.

In the future, identifying ourselves by what we know is going to become outdated; we’ll have to identify ourselves as continuously learning. We’ve always thought about education and learning as transferring knowledge, but knowledge without meaning is no use.

—Joseph Ganem, Ph.D., professor of physics and department chair

Masudul Biswas, Ph.D., agrees that AI cannot replace writing and editing skills or critical thinking. The professor of communication and department chair says it can be used to generate ideas quickly, but a human should verify the content before submitting it anywhere.

Greg Hoplamazian, Ph.D., associate professor of communication, agrees that AI-generated content is imperfect.

“Being aware of legal and ethical uses of the content that is generated as well as common errors that tend to get made are important as you use these tools,” Hoplamazian says.

What’s Next?

Being cautious and aware doesn’t prevent embracing new technology, however. At the Loyola/Notre Dame Library, Library Director Katy O’Neill says she and colleagues in libraries across the country have started using AI to assist with processes like grant application development.

“Libraries have been adapting to emerging technologies for decades to bring information and access to technologies to users with a commitment to doing so anchored in professional values around privacy, accessibility, intellectual freedom, and lifelong learning,” O’Neill says.

Artificial intelligence could very well change our understanding of learning and knowledge. Joseph Ganem, Ph.D., professor of physics and department chair, author of Understanding the Impact of Machine Learning on Labor and Education, sees AI changing how people around the world view education.

“In the future, identifying ourselves by what we know is going to become outdated; we’ll have to identify ourselves as continuously learning,” Ganem says. “We’ve always thought about education and learning as transferring knowledge, but knowledge without meaning is no use.”

[Illustration: the Loyola shield with a brain and computer circuits branching out of it]

The Human Element

One Greyhound who has been a bit slower to adopt artificial intelligence is Peter V. Stanton, CEO of Stanton Communications, who handles public relations, crisis management, and more for the leaders of major corporations, nonprofits, and industry associations.

Stanton—who received his B.A. in Psychology from Loyola in 1974 and his M.S. in Counseling Psychology in 1979—calls himself “something of a technology Luddite,” but does see members of his firm starting to lean into the technology.

“My far more tech-savvy colleagues tell me AI will greatly benefit their ability to conduct research related to client interests and industries, translate into lay English complex issues and technologies, and stimulate thinking about how we may articulate messages and positions of importance for our clients.”

A lot of Stanton’s work involves writing, from messaging documents to byline articles for his clients—and he emphasizes the importance of not depending too much on text-based AI programs like ChatGPT, which can veer toward plagiarism. He also notes that artificial intelligence can’t provide the critical thinking and interpretation services that his firm specializes in.

“We personally engage with subject-matter experts, people who have dedicated their careers to a focus on a specific field of inquiry or practice. We talk to them. We listen. We learn constantly. With their wisdom and perspectives, we are not only able to advise our clients—we are able to explain why our advice is supportable and prudent,” he explains. “I am unaware that AI can provide this uniquely human intelligence.”

He continues, “I would say we are in the business of the application of genuine intelligence. The other is called ‘artificial’ for a reason.”

Meanwhile, Todd Marks, who graduated in 1998 with a degree in mathematics and currently serves as president and CEO of digital agency Mindgrub, spends a significant portion of his time focused on AI, Cloud, and AR/VR services. Unlike Stanton, he’s excited about AI’s ability to automate content—from Requests for Proposals to marketing communications—along with website chatbots, e-commerce recommendation engines, and custom data search.

“Internally, we are using AI for our own marketing as well as for paid programming,” he adds. “We love the intersection of human capital and AI.”

But Marks also advises AI adopters to be cautious. “AI occasionally exhibits hallucinatory behavior, generating fictional content that it perceives as factual,” he notes, adding: “AI still faces challenges stemming from flawed data inputs, lack of transparency, and certain security risks, particularly in sectors such as transportation and health care.”

Overall, though, he’s excited about the advancements—as long as the human element remains—especially as he reflects on his Jesuit, liberal arts education and its emphasis on caring for each individual and the community at large.

“I believe Loyola alumni have a love for humanity and will serve as good shepherds for preserving that humanity as the world becomes more automated and digital.”