
Too Intelligent?

 

Ethics Guide

MIS-diagnosis

 

Fred Bolton stared at his computer screen until his eyes glazed over. He had been working 15-hour days for the past week trying to solve a serious problem that could have a devastating impact on the future of his employer. Fred had worked at A+Meds for almost a decade, and he was proud to be affiliated with a world-leading pharmaceutical company. He started out at the bottom of the IT department but had moved up quickly. He was a fast learner with a never-give-up attitude. But today he was on the verge of giving up.

Fred was astounded at how much the pharmaceutical industry had changed over the previous 10 years. When he first started at A+Meds, the company could drive up sales using direct marketing techniques. Doctors met with company representatives who convinced them that A+Meds’ drugs were the best on the market. Now, technology had started to permeate every aspect of the healthcare industry. Doctors were relying more and more on artificial intelligence (AI)-driven expert systems to select the most appropriate medications and treatments. These systems made recommendations based on drug profiles submitted by the pharmaceutical companies. Companies could update a drug’s profile whenever any aspect of the drug changed, though in practice they rarely did so for minor changes.

Recently, the sales of a new drug had been underperforming. A+Meds had invested tens of millions of dollars in developing it. Company executives and new product developers were convinced that the product was superior to competing drugs. The problem, they believed, was that the expert systems used by doctors were not recommending the product. Sales were suffering, profitability was down, and employee compensation was in jeopardy.

Fred had been tasked with conducting a rigorous investigation of the AI recommendation system. He was supposed to identify the problem and see if there was something the company could do to improve the system’s “perception” of the product. During his testing, Fred found that minor modifications to the drug’s profile made a big difference. But some of the numbers he used to modify the profile were not accurate. Even if they had been, the changes would warrant a regulatory review, which could take an extensive amount of time. The financial damage to the company would be done long before the review was complete. Fred was not looking forward to reporting his findings.
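
To make Fred’s finding concrete, here is a minimal Python sketch of how a scoring-based recommender could rank drugs from their submitted profiles. It is purely illustrative: the attribute names, weights, and numbers are assumptions rather than details from the case or from any real expert system, but it shows why a small, unsupported tweak to one profile field can be enough to flip which drug the system recommends.

    # Hypothetical sketch only: a weighted-scoring recommender over drug profiles.
    # The attributes, weights, and values below are invented for illustration; the
    # case does not describe how the real expert system actually works.
    WEIGHTS = {"efficacy": 0.5, "safety": 0.3, "cost_effectiveness": 0.2}

    def score(profile: dict) -> float:
        """Combine profile attributes (each on a 0-10 scale) into a single score."""
        return sum(WEIGHTS[key] * profile[key] for key in WEIGHTS)

    def recommend(profiles: dict) -> str:
        """Return the name of the drug whose profile scores highest."""
        return max(profiles, key=lambda name: score(profiles[name]))

    drugs = {
        "A+Meds NewDrug": {"efficacy": 8.1, "safety": 8.9, "cost_effectiveness": 6.0},
        "Competitor X":   {"efficacy": 8.4, "safety": 8.7, "cost_effectiveness": 6.2},
    }

    print(recommend(drugs))  # Competitor X: it scores 8.05 vs. 7.92

    # A small, unverified bump to one field is all it takes to flip the ranking,
    # which is why Fred's profile "tweaks" made such a big difference.
    drugs["A+Meds NewDrug"]["efficacy"] = 8.6
    print(recommend(drugs))  # now A+Meds NewDrug, at 8.17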

Information Manipulation

Fred kept looking at the clock on his computer monitor. It was time for his meeting, but he was trying to find an excuse to linger at his desk. He came up empty-handed and headed over to the boardroom. He took a seat at the end of a long conference table and joined Patricia Tanner, a high-level A+Meds sales executive. “So, Fred, what did you find out?” Patricia asked. “Good news, I hope!” Fred explained that, despite his extensive analysis of the recommendation system, he had been unable to find a way to make the system select their product over competing products unless they tweaked the profile.

“But our drug is superior and safer!” she exclaimed. “I was a sales executive at our competitor when they were putting a similar drug through trials, and I know for a fact that our drug is the better choice.”

“That may be,” Fred replied cautiously, “but our profile is based on our current approval guidelines. The drug’s current profile is causing us to lose out to competing drugs.”

They both sat for a minute before Patricia slowly replied, “What if we submit a new profile that the system perceives as more favorable, even though some of the data was a bit of a stretch?”

Fred couldn’t believe she’d just asked that question. He wasn’t sure how to respond without putting his job in jeopardy. “Wouldn’t the addition of inaccurate information to the system be considered a violation? Wouldn’t we be liable if something happened to a patient who took our drug based on altered information?” Fred asked. Patricia replied that drug companies did stuff like this all of the time. Investigations were extremely rare and only occurred if there were numerous patient-related incidents of a serious nature.

Patricia gave him a pointed look and said, “Do you think it is right to have people using what we know to be an inferior drug simply because of how this system interprets drug profiles? What if people get sick, or if something more serious happens to them, because they should have taken our drug but didn’t because of the system? Wouldn’t you feel bad about that?” Fred hadn’t thought about it like that. Maybe Patricia was right. Fred did believe their drug was the better choice. But he wasn’t a doctor. Adhering to federal regulations seemed like the right choice, but not at the risk of keeping people from the medication they should be getting. He let out a sigh and leaned back in his chair. He wasn’t sure what to say.

 

Discussion Questions

  1. According to the ethics principles defined previously in this book:
     a. Do you think that manipulating the recommendation of an AI system, even though the new recommendation may be for the better drug, is ethical according to the categorical imperative (pages 23–24)?
     b. Do you think that manipulating the recommendation of an AI system, even though the new recommendation may be for the better drug, is ethical according to the utilitarian perspective (pages 60–61)?
  2. How would you respond if you were placed in Fred’s shoes? Do you think it is appropriate to submit inaccurate information because the drug may be better and safer than the competition?
  3. How should Fred handle the fact that Patricia made the suggestion to manipulate the drug’s profile? Is her willingness to use this type of tactic cause for concern in its own right?
  4. How do you feel about the growing use of AI and other technological solutions in helping people make decisions? Would you want a doctor treating you based on recommendations from an automated system? Consider other arenas as well. For example, would you trust the recommendation of an automated financial investment system over the advice of a human financial advisor?
