
Episode 21: The ethics of AI technology and business


In March of this year, over 1,000 technology leaders and researchers signed an open letter urging artificial intelligence labs and researchers to pause their efforts in training AI systems stronger than ChatGPT. Among the signatories were Elon Musk, CEO of SpaceX, Tesla and Twitter; Steve Wozniak, a co-founder of Apple; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.

The suggested pause is in an effort to create safety protocols for AI systems as the technology progresses into new territories. Experts worry that without a pause, the systems in place could be mishandled, resulting in a spread of disinformation at a speed that has never been possible before. 

On this episode of Like Nobody’s Business, we’ll answer questions like: When looking at ChatGPT and AI technology through a business lens, what ethics are necessary to consider? How can businesses get ahead of situations with the potential for misuse of the technology? What should businesses keep in mind when applying this technology to systems and processes, and what touchpoints should they be relying on to ensure it’s been executed ethically?

We’ll speak with Dr. Chris MacDonald, an associate professor at TRSM, director of the MBA program, and a speaker and consultant on ethics. He explains what aspects of ChatGPT and AI technology businesses should be aware of and take into consideration.

Podcast Transcript - Episode 21

ChatGPT Welcome to Like Nobody's Business. Today we'll delve into ethical issues like fairness, transparency, accountability, privacy, bias, and the human-AI relationship. Artificial intelligence has evolved by leaps and bounds, transforming the way we live, work, and interact with the world. However, as its capabilities expand, so do the ethical questions surrounding its use. Our goal is to demystify AI ethics in business and provide you with the knowledge to make responsible decisions. We'll discuss the impact of ChatGPT technology on communication and decision-making in organizations, and we'll address concerns like biases, data privacy, and the effect on human workers. Whether you're a business professional or simply curious about AI ethics, this episode will equip you with the tools to navigate AI ethics in business. Join us as we explore the ethical boundaries of AI, fostering innovation and fairness.
Cassandra Earle The intro you just heard was written by ChatGPT with some audio tricks to make it sound like a robot. Of course, it's easy to identify AI when it sounds like something artificial or engineered, but we've entered a new era where that distinction can be harder to make. If you read the headlines, some say that AI technology will lead to great innovations, and others say it should be treated as a global threat. But good or bad, AI is here now, and some form of it, regulated or not, is here to stay. So when it comes to the ethics of using AI, what should businesses do? I'm your real life human host, Cassandra Earle, and this is Like Nobody's Business.
Cassandra Earle In March of this year, over a thousand technology leaders and researchers signed an open letter urging artificial intelligence labs and researchers to pause their efforts in training AI systems stronger than ChatGPT. Included in signing the letter were Elon Musk, CEO of SpaceX, Tesla, and Twitter; Steve Wozniak, a co-founder of Apple; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock. The suggested pause is in an effort to create safety protocols for AI systems as the technology progresses into new territories. Experts worry that without a pause, the systems in place could be mishandled, resulting in a spread of disinformation at a speed that has never been possible before. OpenAI, which owns ChatGPT, conducted tests with researchers on dangerous ways to misuse the technology, which resulted in the AI describing how to buy illegal firearms online, as well as how to make dangerous substances from household items.
Cassandra Earle The company has since made changes to the technology, and the system is no longer able to do these things. When looking at ChatGPT and AI technology through a business lens, what ethics are necessary to consider? How can businesses get ahead of situations with the potential for misuse of the technology? What should businesses keep in mind when applying this technology to their systems and processes? And what touchpoints should they be relying on to ensure it's been executed ethically? We'll speak with Dr. Chris MacDonald, an associate professor at TRSM, director of the MBA program, and a speaker and consultant on ethics. He explains what aspects of ChatGPT and AI technology businesses should be aware of and take into consideration. The transition into an AI world is a popular topic of conversation in every industry. What will it look like? How will it impact workflow and process? What will this mean for the success of individuals and companies? I asked Professor MacDonald what his thoughts were on how businesses can navigate this change as seamlessly and effectively as possible.
Dr. Chris MacDonald So it's a good question, but one that I'm not sure I have much of an answer for, and frankly, I kind of doubt anyone does. In fact, it's an open question whether, 20 years from now, anyone will even remember the excitement and concern that was in the air about ChatGPT and other generative AI applications way back in 2023, or whether AI will just be part of the water we're all swimming in by then, without our even thinking about it. But in terms of businesses, I think there's probably no secret sauce, except to be thoughtful and be open to discussion. The thing I always tell students in my ethics class is: when, after the fact, someone comes to you and says, "Hey, why did you do that?"
Dr. Chris MacDonald "Why did you implement this technology that way? Or why did you structure your operation that way?" The worst answer you can have is, gee, I never thought about it. It's your job to think about it. It's your job to be thoughtful about it. And so if you're open to engaging with, you know, concerned stakeholders, whether that's employees or customers or investors or, you know, or social groups or what people call civil society, as long as you're open to discussion and open to taking discussion with concerned folks, seriously, then it's unlikely you're gonna miss something in a way that's gonna lead you to deeply regret and be unable to explain what your actions were after the fact.
Cassandra Earle Right now, businesses are trying out different ways to implement AI, from chatbot representatives during online shopping to streamlining email requests from customers. Professor MacDonald explains the ways in which businesses might continue to implement AI and what customers, employees, and stakeholders can expect in the future.
Dr. Chris MacDonald Some uses are gonna be very specific to particular kinds of businesses. For example, Amazon and other online retailers can use AI to help tailor product offerings to specific customers, and are probably doing so in more and more sophisticated ways. So some uses are gonna be grounded in what the business actually does. But we can see AI in all kinds of places in all kinds of businesses, to some extent irrespective of the particular lines of business a company is in. So for example, in the human resources functions at all kinds of companies: starting with the use of AI in the hiring process, using it to filter big piles of job applicants, through to monitoring and evaluating employees as they proceed through their careers.
Cassandra Earle What's important to consider about the differences between implementing AI technology at a larger company, like a finance firm, and at a smaller, individually owned business? What challenges might present themselves?
Dr. Chris MacDonald I think it's hard to come up with a general answer to that, because small companies vary so much depending on what line of business they're in, whether they're a small hair salon or a small business-to-business provider of software or software as a service or something like that. But one general worry is that startups and other small companies may be so focused on surviving or growing, depending on what stage they're at, that they may not feel they have the luxury of slowing down to think about the kinds of issues we've been talking about: proper ethical implementation of AI. And they may not have the staff to do it. So a big company like Microsoft or Google can say, oh, well, we gotta think about these issues.
Dr. Chris MacDonald Let's hire an entire team of people. Let's hire experts from the universities. Let's hire people with the relevant know-how and insight to think about these problems and to advise us and to develop policies. So big companies can do that. I was speaking a couple of weeks ago with a person who is in charge of AI policy for IBM; that's a job title. At a place like IBM, you can have a head of AI policy, and that person can have a staff and so on. But a startup whose entire workforce is five or 10 people isn't gonna have the people to do that kind of thing, and they likely won't have the cash to even consider hiring such people. So I think it's a very particular and interesting kind of challenge for the smaller organizations, just having the capacity to engage with these issues.
Cassandra Earle Some employees are using ChatGPT and AI technology to write difficult emails or to help them ask for promotions or raises. Is it possible that this will become more normalized moving forward, and is it something that business owners and managers should prepare for?
Dr. Chris MacDonald Well, ChatGPT isn't an expert on tough conversations, or an expert on anything really. It's a very general-purpose tool. But it can probably do a pretty decent job of summarizing some widely held views on the topic and giving an overview of common tips. I think the more interesting thing would be a related but more specialized tool, trained specifically to help people do better at those kinds of conversations. That's something that might actually do a lot of good, as long as people resist the urge to let the AI write the email for them, right? You don't want the AI to write the email to your boss. You want to use the AI more like a coach. If it can coach you to do better at those kinds of conversations, then I can see communications improving within organizations, and that could only be a good thing.
Cassandra Earle So we understand that AI is making its way into business, and we understand in which areas, but is this ethical? Is it morally right to be using AI in areas that humans used to occupy? Are there ethical issues behind using AI technology in business? 
Dr. Chris MacDonald Sure. I think there are plenty, but we have to remember that simply identifying something as an ethical issue doesn't mean it's a bad thing. It means that these are things that matter from a human point of view, and that they might be very good things or very bad things, but they're always things that warrant examination and discussion. So everything from privacy issues, through to intellectual property issues, through to the use of AI in sophisticated sales efforts, changes in employment patterns, racial or other biases built into the language models that an AI is based on, or the use or misuse of AI by employees in completing tasks or employment-based testing or evaluation, sort of the corporate equivalent of students using ChatGPT to plagiarize their essay. All of these are ethical issues, but just listing issues is pretty far from reaching anything like a verdict on any one of them, let alone a verdict about the technology overall.
Cassandra Earle These ethical issues are being weighed by experts and considered by business managers and owners, but are there other guidelines they can follow? There isn't a rule book on how to use AI ethically, but should there be? Professor MacDonald explains what businesses and managers can do to ensure they're considering all the ethics involved, even without a written rule book.

Dr. Chris MacDonald I don't think there's anything authoritative, but that's the nature of ethics: it's kind of a grand social discussion. Certainly there are people with expertise and insights. So there are those of us who teach ethics for a living, for example, and there are those who understand the technology really well. Unfortunately, those two bodies of knowledge are often very separate. And so for any really new technology, it either takes a while for the technology folks to learn a bunch about the ethical and social implications, or it takes the ethicists and others who have an interest in evaluating technologies a while to learn about the technical details in order to say something reasonable. But a lot has been and is being written about the ethics of technology in general and the ethics of AI in particular. And so I think it makes sense for companies to be reading up; I mean, not companies, but the managers and boards, to be reading up and engaging in discussions, including discussions with their own employees, including frontline employees, because they may well have insight into the impact on customers that people higher up in the organization don't have.
Cassandra Earle And this impact on customers is also an important aspect of this conversation. Professor MacDonald discusses which aspects of AI technology businesses may feel responsible to disclose, and which parts may be the responsibility of the consumer.
Dr. Chris MacDonald Well, you know, disclosure is almost always a good thing, letting people know what you're doing, but it's not always clear how much good that does. So, for example, knowing that Amazon is using a fancy algorithm (whether that counts as AI or not is not always clear) to try to sell me things doesn't necessarily help me much to understand what's really going on, or whether I need to take steps to defend myself, or how to do that. So we all need to become more sophisticated consumers to some extent. We all need to understand that AI is being used in a range of ways, that the online chat function at your phone company, for example, may or may not be an AI that you're talking to.
Dr. Chris MacDonald So we all need to understand what the possibilities are, but even that is only a partial step, because there are going to be more and less sophisticated consumers, including in some cases children (because in some cases, children are consumers), or people who have cognitive impairments of various kinds, or our elders, who in some cases in later years become less cognitively capable and less in touch with technology. And so they may be especially susceptible to failing to understand what is going on when they're interacting with a company that is using an AI in some way that may or may not be aimed at helping the consumer.

Cassandra Earle The generational divide is something to consider in the social response to artificial intelligence; how the older population, in comparison to the younger, may accept and welcome this technology could be vastly different. The same thing goes for people who work in positions that can't be replaced with AI technology, in comparison to those who are concerned about being replaced. What does this mean for businesses and individuals on a social level?
Dr. Chris MacDonald It may be too soon to tell from kind of a broad social point of view, and it's likely to be perceived very differently by different groups, in part depending on age. In some businesses, for example, employees might be very alarmed, because they may be worried that AI is gonna be taking over tasks that they've been paid to do up till now. Investors, on the other hand, might see dollar signs and be quite excited. For customers, it may depend a bit on generation. And other people have commented on this: it's too easy, and probably a mistake, to judge it just based on generation. For a long time, people were referring to people the age of my students as kind of digital natives, people who were brought up digital, who have had an email account and a mobile phone since they were old enough to use one.
Dr. Chris MacDonald But what we're realizing is that that doesn't always mean real sophistication about digital things. Just because my average student has had a phone and an email account since they were eight or something, that doesn't mean they understand how ChatGPT might be used and abused. It doesn't mean that they're gonna be sophisticated about the ways companies in particular are using the technology. And so I think it's something that we all need to work on, and try our best not to say, "Well, once the oldest generation sort of passes along, everybody who's alive now is gonna be super comfortable with the technology." Well, my students are super comfortable with email, but they're not as comfortable with artificial intelligence.

Cassandra Earle As previously mentioned, ChatGPT has also proven it can be misused, as users found the technology was able to teach them how to commit dangerous and illegal acts. Although its creators assure us that those functions of ChatGPT have been removed, other potential for misuse of AI surely exists. Professor MacDonald explains what this could mean for business.

Dr. Chris MacDonald I think there are really two different categories of misuse that we need to think about. One is what we might call intentional misuse: when a company, hypothetically, uses AI to outsmart regulatory systems, or to bamboozle customers into buying something that won't meet their needs, or to intentionally hire in a biased way without getting caught, or to provide customers with services that ought to be illegal. The other, I think, is unintentional misuse, as when a company or an employee uses AI in a way that is harmful just because they either don't understand the technology very well, and we've seen some cases of that, or they don't understand the implications of that specific way of using it. And I really suspect that this latter category is the one that's gonna be much more common. However much of each of these two behaviors goes on, I think unintentional misuse is gonna be far more common, just given how complex the technology is. People are just gonna accidentally misuse it in all kinds of ways. That's the one I worry about.
Cassandra Earle After all this talk about artificial intelligence, how it's being used, in what ways it might be implemented in the future, how it can be misused...I was curious to know what keeps an expert up at night when it comes to ChatGPT and AI technology. Professor MacDonald shared his thoughts with me.
Dr. Chris MacDonald I think there are kind of three things that would keep me up at night. First, I'd be worried about the possibility that my employees might be using AI in unauthorized ways to take shortcuts, possibly in the name of efficiency or doing their jobs better, but in ways that get us into trouble. Second, and this one's gonna depend a lot on what business you're in, but in some cases the worry will be that someone is gonna find a way to use AI to eat your lunch; in other words, to pull your market share out from under you. Some companies are simply gonna die, I think, because of AI. But most generally, I'd worry about the pace of change and the development and implementation of AI, in particular generative AI, and in particular applications based on large language models such as ChatGPT, where whatever I worry about tonight is liable to be obsolete a week from now. And so it's gonna be a running series of worries; I can't even predict what I'm gonna be worried about next week.
Cassandra Earle And as for the future of AI and business, Professor MacDonald thinks people should get used to the idea.
Dr. Chris MacDonald I think the evidence is that the technology is becoming so widely available, so cheap, and so effective that, yeah, lots of companies just are going to be using it. And in some cases you'll care, and in some cases you won't. If the chat bot on my phone company's website is fast, efficient, answers questions, has answers at its fingertips that a human employee would take longer to answer, I might be perfectly happy dealing with an AI. And in other cases, I might just be incredibly frustrated by its inability to detect the nuance of my questions. In those cases, I might really want to know that it's an AI, and then realize that I've got the option of saying, "Let me talk to a human, please."

Cassandra Earle Like Nobody's Business is a presentation of Toronto Metropolitan University's Ted Rogers School of Management. For more information, visit torontomu.ca/tedrogersschool. Thank you for listening.