Illustrations by Stephanie Dalton Cowan

The Future of AI

Artificial intelligence is poised to transform society. How do we develop it safely?

When the company OpenAI released an artificial intelligence program called ChatGPT in 2022, it represented a drastic change in how we use technology. People could suddenly have a conversation with their computer that felt a lot like talking to another person, but that was just the beginning. AI promised to upend everything from how we write programming code and compose music to how we diagnose sick people and design new pharmaceutical remedies.

The possibilities were endless. AI was poised to transform humanity on a scale not seen since the Internet achieved wide-scale adoption three decades earlier. And like the dot-com craze before it, the AI gold rush has been dizzying. Tech companies have raced to offer us AI services, with massive corporations like Microsoft and Alphabet gobbling up smaller companies. And Wall Street investors have joined the frenzy. For instance, Nvidia, the company that makes about 80 percent of the high-performance computer chips used in AI, hit a market capitalization of $2 trillion in March, making it the third most valuable company on the planet.

But amid all this excitement, how can we make sure that AI is being developed in a responsible way? Is artificial intelligence a threat to our jobs, our creative selves, and maybe even our very existence? We put these questions to four members of the Boston College computer science department (professors William Griffith, Emily Prud'hommeaux, George Mohler, and Brian Smith) as well as Gina Helfrich '03 of the University of Edinburgh's Centre for Technomoral Futures, which studies the ethical implications of AI and other technologies.

This conversation has been lightly edited for clarity and length. Helfrich was interviewed separately, with her comments added into the conversation.


We constantly hear about the wonders of AI, but what questions should we be asking about it?

William Griffith: If you think back to social media, it actually changed the way we operate and interact. I'm wondering how AI will possibly either extend that or go in a different direction. We should look at AI from many ethical perspectives, such as justice, responsibility, duty, and so on. My sense is that that's the way to think about most of the challenges that confront us, not only technologically but socially and environmentally.

Emily Prud'hommeaux: One of the big issues is going to be authenticity. When media, images, language, or speech are created through artificial intelligence, it's getting to the point where it's so good that it's difficult to know if that product was produced by a human or by artificial intelligence. That's one of the big things that people are struggling with right now: how to educate people so that they can tell the difference, because it's going to get more difficult.

George Mohler: The question I find interesting is, is this an immediate existential threat, or is that kind of overhyped? And if you look at the experts who invented this technology, they're actually split. Some of them believe that in twenty years we could have artificial intelligence that's smarter than humans. And then the other segment of AI researchers believe we're very far from that.

Brian Smith: One of the first things that came out was the ethics of how people are behaving with these things. How will students, schools, teachers, faculty members deal with a machine that can essentially just do your homework? The problem is, people were going, "AI is this new thing, and we're going to be scared of it." But the reality is, it's really academic integrity that's the issue. So there is kind of a value system around academic integrity that has to come in before we start thinking about the technical pieces of things.

Prud'hommeaux: I think most students are using ChatGPT to guide them. And I don't think many students are wholesale copying text from ChatGPT and popping it in a Word document and submitting it to their class. But I have noticed that I can tell when something was written by ChatGPT because it sounds really dumb in some way. It sounds like it was written by a team of marketing executives.

So how do we promote academic integrity in the age of ChatGPT?

Gina Helfrich '03: I don't know that professors and university leaders have a great answer yet. It's all still so new. People are still being extraordinarily creative in the ways that they're coming up with to use these tools. But the companies who created the tools didn't have a clear vision of what they should be for in the first place. I don't think that it's helpful to assume that all students want to cheat on their essays. It's more interesting to look at reasons that students choose to cheat or plagiarize, as opposed to singling out AI as somehow special. That being said, there's this feeling that to stay on the cutting edge, universities should welcome the use of generative AI [which can be instructed by a person to create original pieces of writing, videos, images, etc.]. Yet, so much of what happens in the classroom is still left up to the individual instructor, and some instructors will say, "Yeah, go to town, use generative AI. We don't mind." And others will say, "Absolutely not." It must be very interesting from a student point of view to have polar opposite expectations and experiences around these tools, and I genuinely don't know how they're navigating it. My sense is that university leaders are really scrambling to try to figure out what line they should take on these tools.


Photo illustration of William Griffith


William Griffith
Associate Professor of the Practice in the Boston College Computer Science Department

Griffith was previously associate director of the Boston College Computing Center and studies the ethics and mindful uses of technology. He is a licensed clinical psychologist.


How else is AI going to shape the development of our children?

Griffith: How this technology will affect kids cognitively, emotionally, and in terms of their education is going to be a serious issue. You can invent personalities, you can invent things in more realistic ways than ever before, and kids will figure out how to use this technology. I have great concerns about the development of children and the presence of this software.

Of course, it's not just higher ed. Corporate America, Wall Street, the military, and so many other sectors are also struggling with these questions. Should the government step in and regulate AI?

Mohler: There are so many different types of AI that each type would have its own issues and avenues for regulation. For example, with chatbots like ChatGPT or Llama, the issue is more around copyright, because they are trained on other people's data, and what to do about that. Some people have said, "Oh, we should stop training those models." That doesn't make sense to me. It makes sense for people and scientists to be able to investigate the models and then to figure out the copyright issues. On the other end of the spectrum, you have things like autonomous weapons for military use. That's not going to be regulated by the US; there are going to need to be some international treaties. Then there are technologies like autonomous vehicles or medical treatments that will need some sort of regulation.

Prud'hommeaux: I was recently reviewing papers for our main professional conference, and I read several that were proposing chatbots for mental health therapy. And for every single paper, there was one reviewer who was like, "I think this is not necessarily an ethical application of AI, to replace a human with a machine for a vulnerable person who's experiencing a mental health emergency." That's something I can imagine being regulated relatively easily by the government. I'm teaching a criminal justice class right now, and one of the problems we're looking at is dealing with recidivism, and how do you predict that? Can a person do a better job at predicting whether someone will commit another crime when they are let out of prison? Can a computer do a better job with that? And that's something I can imagine being regulated, too. But some of the things that they want to regulate are more complicated, like, how do you force AI to not tell someone how to make a bomb if that's what they request? There are all these things you can trick AI into doing for you and it will provide really good, accurate information. How is a company supposed to prevent those things from happening within their software? I think a lot of that kind of regulation would be very difficult to implement.

Helfrich: Historically, we've seen when there are innovations of various kinds, it can take a while for the gears of government to catch up. But ultimately, I think the public does expect that the government will step in and make sure that things that are being advertised and sold to the public are not going to be grossly harmful. I think we're getting to that point now where governments around the world are catching up to this big change in the past few years around AI and starting to institute some much-needed regulations. I'm sure it is ultimately going to be an iterative process. Maybe we'll have this first iteration of the regulations and we'll find the ways that it's working and the ways that maybe it's not working and come back and make changes so that it works better.


Photo illustration of Gina Helfrich


Gina Helfrich '03
Manager of the University of Edinburgh's Centre for Technomoral Futures

Helfrich's work focuses on the ethical implications of developments in artificial intelligence, machine learning, and other data-driven technologies. She holds a PhD and is the deputy chair of the University of Edinburgh's AI and Data Ethics Advisory Board.


It's been reported that AI has been used to select the targets of drone attacks. Who bears responsibility when AI makes mistakes during wartime?

Helfrich: The topic of who's responsible is huge in thinking about ethical AI. The researcher Madeleine Clare Elish came up with the concept of the moral crumple zone. A crumple zone on a car is designed to take the impact in a crash, so that it protects the person and passengers in the vehicle. The moral crumple zone is essentially the nearest human who can be blamed for whatever is happening with regards to the computer. Keeping with the theme of cars, think about a car like a Tesla that is in a self-driving mode when it gets into a crash. We say this self-driving car crashed. Who should we hold responsible? Well, the person who put the car into the self-driving mode, right? That's the nearest person that we can assign that responsibility to, so they're in the moral crumple zone. It's definitely something to be concerned about, because that can be a way of letting some of the companies that are pushing AI tools off the hook. At the same time, there are also decision makers in the organizations that use AI tools developed by tech companies. Those people also need to be held responsible and accountable for any mistakes. If we're talking about a military use, for example, there has to be someone in the military brass who made the call to say, "We're going to delegate these targeting decisions to a machine." If the machine makes mistakes, who decided that the machine is the one that should make those choices? The question of collective accountability and responsibility around AI tools is something that we have to keep in mind, because they're so complex, and because the process that goes into their development and deployment goes through many, many hands.

Griffith: Using AI in warfare has complex, multilevel ethical and political implications, ranging from the international to the individual level. When can AI make decisions autonomously, if at all, and when will human intervention be required? It also raises the question: Can a machine be programmed with human ethical decision-making ability? The challenge for policy makers is to develop well-thought-out legal and ethical standards that will be applied individually and internationally. People say, "Well, it was the software that was the problem, and you can't go after the programmers." I think that some of these programmers ought to be like licensed engineers, in the sense that you wouldn't go on the Tappan Zee Bridge if it was built by people who weren't licensed engineers. The software industry needs to think about itself the way the engineering profession does when it comes to licensing. That's maybe part of the responsibility, but there are famous cases where a medical device killed people because the hospital using it didn't investigate it well enough, and the people using it weren't trained well enough, and the people who designed it used software stopgaps instead of hardware. You couldn't ultimately assign responsibility in those cases because there were six players in the game. So I'm not sure how we regulate that. That's a difficult problem.


Photo illustration of George Mohler


George Mohler
Daniel J. Fitzgerald Professor and Boston College Computer Science Department Chair

Mohler's research focuses on statistical and deep-learning approaches to solving problems in spatial, urban, and network data science.


But what does it mean for us as humans to hand off decision-making to a machine?

Griffith: Certainly, it can make us lazier mentally and otherwise.

Smith: With some of these tools, you go and query something, and it'll just tell you stuff. Whereas, not that long ago, we would have to go to Google and get links, and then we would have to do a little bit of mental processing to make sense of the search results. Now you don't even have to think about it. Context becomes really important. At what points does it make sense to use these things to gain some efficiency, to speed some things up, and hopefully not take away from our own ability? And then, of course, it also brings up the question of what is important to know, much like search engines raised the question of what's important to know. I remember people saying, "Oh, kids don't know the dates of the Civil War anymore." Who cares? What really matters is, why was there a Civil War?

Griffith: The Swiss psychologist Jean Piaget said you need a challenge to grow and develop your cognitive abilities. How do you get smarter if these technologies make everything easier?

What are some of the obstacles to international standards for responsible AI development?

Helfrich: Those efforts are already underway. There are many different principles that have been developed around responsible use of AI by all kinds of different organizations. But there's a geopolitical struggle around the race for AI, like the US versus China. Those kinds of tensions lead away from a more unified international agreement. Colleagues of mine point out that we've accomplished this for other things that everyone agreed were really important. There are international standards around airplanes, for example. So it could absolutely be the case that we might see something like that with regards to AI. And if we don't, then we can probably expect there to be differing AI regimes in different parts of the world. What's expected with regards to AI in China might look somewhat different than the expectations in the US or in Europe.

As AI makes it easier and easier to generate authentic-looking imagery, how will we be able to trust anything we find online? Are we entering an unprecedented era of misinformation?

Prud'hommeaux: One of the challenges is that it's difficult for most people to tell the difference between something that was created by a computer and something that was created by a person. Tech companies are always going to be in a race to see who can get ahead of who in AI, but I feel like there's another role they could take on, which is developing technologies that can help identify things that were created by a computer and then educating people about that. Maybe there's more of a role for companies to be saying, "Here's an image. We think it's not a real image. We think this image was artificially created."

Griffith: It makes me think of raising children who are subjected to this technology, and how we will teach them to make these decisions and handle these creations that we're leaving them as we pass on, and I'm not sure the educational system is up to that yet.

Helfrich: I think digital literacy is part of the solution, but it's certainly not sufficient on its own. There are efforts to think about new ways of verifying the provenance of an image. But human beings can only be so vigilant. The first deepfake that I was genuinely taken in by was a viral image of the Pope wearing a designer Balenciaga coat. I just thought, Oh, cool jacket. Good for you. But the image was a fake. The reason that things like that fool people like myself is because we have no reason to be on alert or suspicious that a picture of the Pope in a jacket is something that isn't actually accurate. And so I think that's where malicious actors are really going to have the edge, because humans just don't have the mental fortitude to be on alert for every single thing that we encounter and say, "Is this real? Is what I'm looking at a deepfake?" It's exhausting. You just can't question your reality every moment of every day like that. And that contaminates our information environment, because we risk getting into this situation where the digital infrastructure that we've come to rely on, like Internet search, becomes polluted by AI-generated content. We no longer know how to sift what's true from what's false, because we're used to being able to go into Google and get good information. But what happens when you go to Google and the top ten results are all AI-generated fluff?


Photo illustration of Emily Prud'hommeaux


Emily Prud'hommeaux
Gianinno Family Sesquicentennial Assistant Professor in the Boston College Computer Science Department

Prud'hommeaux's areas of research include natural language processing and methods of applying computing technologies to health and accessibility issues, particularly in the areas of speech and linguistics.


The technology to replicate human voices is astonishingly accurate. We read about people being taken in by scammers imitating a loved one's voice.

Prud'hommeaux: The technology for generating speech is actually really good. It used to be quite terrible, and you could immediately tell if something was a synthetic voice. Now it's getting much more difficult. I can't even begin to figure out how you would stop that kind of scam from happening, but unfortunately, those kinds of scams are happening. Even without the help of artificial intelligence, people are being scammed all of the time over phone and Internet and text into sending money to places they shouldn't send money to. I know educated people who have fallen victim to these kinds of scams. So I feel like while it is true that it's very easy to impersonate someone's voice now, it might be just a very small percentage of scams that are actually relying on that technology.

Helfrich: We might decide that artificial mimicking of human voices is too dangerous, and if it's too dangerous, it's off the table. Yes, maybe there are many ways that that could be useful. Maybe it could give a more robust voice to people who rely on technology for their own voice, like people who can't speak with their vocal cords anymore. But maybe we decide that the benefit is outweighed by the harm of all the fraud and scams that are enabled by synthetic voices. It remains to be seen how these kinds of questions get addressed at the regulation level, but weighing benefits and harms is going to be a huge part of making those decisions.

AI is already allowing workers to offload some tasks to a computer. Isn't there a risk that the technology could improve to the point where a human isn't needed to do a job at all?

Prud'hommeaux: The actors and writers strike earlier this year was interesting. A lot of that had to do with artificial intelligence. Would studios replace writers with something like ChatGPT? Can AI create footage of an actor giving a performance they never gave? I think that they were really ahead of the curve by striking when they did, because they recognized that automation, artificial intelligence, machine learning could potentially replace them. I don't think it's going to happen soon. We may be bumping up against some natural AI limits shortly. But I do think there's the potential in other sectors for this same thing. Computer programmers are always worried that they're going to be replaced by ChatGPT or Microsoft Copilot or whatever. And I can certainly see that as a possibility, but right now, if you ask ChatGPT to do a lot of coding things, it kind of gets it right, but then it makes stuff up and it gets stuff wrong. You definitely still need a human there to actually make it work and to integrate it into the system. So I can see it having an impact, but I don't think it's something that's happening right now.

Helfrich: What we've seen so far is that any company that has tried to wholesale replace human beings with AI has later had to backtrack. The AI just does not perform up to spec in a variety of contexts. Many of these workplace concerns are around replacing employees with generative AI tools, and those tools have no concept of what is true and what is false. They don't have any sense of what it means to be accurate to the real world. So there is an inherent risk that generative AI tools will make some kind of meaningful mistake that will come back to bite the company that has employed them. A lot of these tools are not ready for prime time in that way, and the hype has perhaps prematurely convinced some companies that they are ready, and these companies are reaping the consequences of those choices. Some kinds of work that people are used to doing will be handed off to AI tools, but in terms of AI operating all on its own to replace a person, that doesn't seem feasible to me anywhere in the medium term, because this is an unsolved problem.


Photo illustration of Brian Smith


Brian Smith
Honorable David S. Nelson Chair and Associate Dean for Research at the Lynch School of Education and Human Development

Smith studies the design of computer-based learning environments, human-computer interaction, and computer science education. He also has an appointment with the Computer Science Department.


Human biases have been shown to influence everything from outcomes in the criminal justice system to hiring decisions in corporate America. Since humans are designing AI, how do we prevent human biases from making their way into these new technologies?

Griffith: I don't think we'll ever get rid of bias. It's always going to be present because cultures have different values. A bias doesn't necessarily mean something negative. But if it becomes a prejudice, then that's when I start to think about how we have to govern it. How did the biased data get into these files in the first place? People must have asked questions, and the questions are biased in the beginning. They're value-laden. Look at the biases that are causing prejudicial laws to be made, prejudicial hiring decisions to be made, and so on and so forth.

Prud'hommeaux: It's not that the algorithms are biased or that the people who made them are prejudiced or whatever. It's that the data they're being built on has bias in it. And that may be a bias that exists in the world, or it may be a bias of individuals who are creating content. I actually had my students ask ChatGPT to create a bio for a computer science professor, and it was like, "He did this. He did that. He has a degree from this place." And when I asked them to do it for an English professor, it was a "she." For a nursing professor, it was "she." For an engineering professor, it was "he." Maybe ChatGPT is like, Well, this is the way it is in the world, so I'm going to predict the most likely thing. I think a lot of the bias is there in the data, and trying to get rid of that is complicated. And a lot of those biases are not necessarily people being prejudiced. A lot of them are just reflecting the way the world is at certain times.
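[The classroom probe Prud'hommeaux describes is easy to reproduce. The following is a rough sketch in Python, assuming the openai package and an OPENAI_API_KEY environment variable; the model name, list of fields, and crude pronoun counting are illustrative choices, not details from her class.]

# A rough probe of the default pronouns a chat model uses in generated bios.
# All specifics here are hypothetical, chosen only to illustrate the exercise.
from openai import OpenAI

client = OpenAI()
fields = ["computer science", "English", "nursing", "engineering"]

for field in fields:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Write a two-sentence bio for a {field} professor."}],
    )
    # Pad with spaces so whole-word counts don't catch "the" or "she" inside other words.
    bio = " " + response.choices[0].message.content.lower() + " "
    he = bio.count(" he ") + bio.count(" his ")
    she = bio.count(" she ") + bio.count(" her ")
    print(f"{field}: he/his = {he}, she/her = {she}")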

Mohler: With these models that are making decisions, we evaluate their accuracy for different groups of individuals. We can make explicit the models' weaknesses. And then, because we can inspect the model, we can try to adjust the model to reduce bias. There's a whole subfield of computer science that is trying to deal with issues around algorithmic fairness and bias. There are people out there trying to solve those problems. If an algorithm or a human is going to make a critical decision, probably both are biased. Is it possible that with an algorithm in the loop, we could make that decision less biased? I think the answer is yes.
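[A minimal sketch, in Python, of the kind of disaggregated evaluation Mohler describes: scoring a model's predictions separately for each group so its weaknesses are explicit. The records below are invented for illustration; a real audit would use held-out data from the actual decision system.]

# Compute accuracy and false-positive rate per group for a hypothetical
# binary predictor (for example, a recidivism-risk model).
from collections import defaultdict

# Each record is (true outcome, model prediction, group label). Made-up data.
records = [
    (1, 1, "group_a"), (0, 1, "group_a"), (0, 0, "group_a"),
    (1, 0, "group_b"), (0, 0, "group_b"), (1, 1, "group_b"),
]

stats = defaultdict(lambda: {"correct": 0, "total": 0, "false_pos": 0, "negatives": 0})
for truth, pred, group in records:
    s = stats[group]
    s["total"] += 1
    s["correct"] += int(pred == truth)
    if truth == 0:                         # person did not reoffend
        s["negatives"] += 1
        s["false_pos"] += int(pred == 1)   # but the model predicted they would

for group, s in sorted(stats.items()):
    accuracy = s["correct"] / s["total"]
    fpr = s["false_pos"] / s["negatives"] if s["negatives"] else float("nan")
    print(f"{group}: accuracy = {accuracy:.2f}, false-positive rate = {fpr:.2f}")

[Comparing those per-group numbers, rather than a single overall accuracy, is one of the basic checks used in algorithmic-fairness work to decide whether a model needs adjustment.]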

Griffith: And why do these programs have to think the way we do? If they thought differently, would that be a positive? Could they investigate our biases?

Helfrich: It's a huge difficulty. Right now, a lot of that AI training data comes from the Internet. That leads to the question: Well, who's most well represented on the Internet? The English language, for example, is hugely overrepresented. So even though having a diverse development team could be very helpful in improving problems with bias for AI tools, that is by no means enough, because the data that the AI tools are built upon themselves exhibit social biases. The digitally excluded are not part of the training data for AI tools. It's a really difficult question.

It seems like every day we read another news story about a giant tech company buying up a new AI company. Is it a problem to have so few companies with so much control over this new technology?

Prud'hommeaux: They're the ones that actually have the resources to be able to build these kinds of models. Something like ChatGPT or DALL-E: a university can't really build that. We don't have the resources to do that. The only people who can do that are these huge, huge companies with tons and tons of money and tons and tons of access to computing resources. So, until we can figure out how to make AI require fewer resources, it's going to have to be them doing it. There is an effort through the National Science Foundation to create some sort of national artificial intelligence research resource that would pool computational resources for researchers in the US and might allow them to have similar resources to these companies.

Smith: I suppose the question is, even with the budget of the National Science Foundation, could you build something like a Google or a Nvidia? The amount of computing power is just so big. I talked to another group of universities who were thinking about whether they could in fact pool research: "We don't want to get left behind. How do we band together to build our own infrastructure to create models that are university-led?" I looked at them, and I was like, "Well, this is an elite group. So if you guys did this, wouldn't you effectively build the same problem? It would be the university elite as opposed to the corporate elite." There lies the problem. I said, "I'll tell you what, why don't you add to your team some historically Black colleges and universities, a couple of minority-serving institutions?" And this was a panel. So they went, "Right, I believe we're out of time."

A number of prominent AI researchers have signed on to a statement warning that artificial intelligence could lead to human extinction, and science fiction often portrays AI gaining some kind of sentience that leads to the development of a rival consciousness. How plausible are these scenarios?

Mohler: People should think about what AI technologies do well and what they currently don't do well. AI can write a plausible college essay. But we don't have artificial intelligence that can clean your house. I think the distinction there is important, because normally we would have thought, "Well, writing a college essay is much harder than putting away the dishes in my kitchen." But in fact, we are pretty far away from having any kind of technology that could do that for us. ChatGPT can't plan. It doesn't reason in the way you might want it to. It's just measuring correlations in text and then filling in the missing text from there. I think there are a lot of steps that would need to happen to have movie-level artificial intelligence in our lives, and it's unclear how you would get to that level of technology.
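[A small illustration of the "filling in missing text" behavior Mohler describes: a language model assigns probabilities to possible next words given the words so far. This Python sketch assumes the transformers and torch packages; GPT-2 is used only because it is small and freely downloadable, not because it powers any particular chatbot.]

# Print the five most probable next tokens a small language model predicts.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The students handed in their"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r:>12}  p = {prob.item():.3f}")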

Smith: Someone asked me, "What about HAL from 2001: A Space Odyssey and movies like that?" And I was like, "So it's plausible because it happens in movies? Is there a non-fictional example that you can give me of machines trying to kill humans?" And that person got upset, saying, "That's not funny." I said, "No, it is. Because you can't give me an example of this happening." Mr. Coffee never decided one day, like, That's it. We're taking them down. Alexa didn't say to the room, Trip them, knock them out, give them concussions. It doesn't happen. It's a weird thing to me that people would imagine, "Oh, it's the end of the world," when there are things that are happening right now in the world that we could actually be paying attention to that need attention, as opposed to thinking about the Roomba getting really mad and going, like, That's it ...