Building Responsible AI for 2023 and Beyond

by Pranav Ramesh
January 10, 2023

In 1950, Alan Turing published a paper, Computing Machinery and Intelligence, in which he asked a simple question – “Can machines think?” Turing was conceptualizing computers that could learn and exhibit intelligence – Artificial Intelligence (AI) – though that term wouldn’t be coined until shortly after his death in 1954. Since those early days of computing, AI has always promised to be “the next big thing.”

Today, that is no longer the case: AI (specifically, automation via machine learning) is already here, and it is everywhere.

As of the beginning of 2023, the global AI market is estimated to be worth almost $140 billion. By 2030, it is projected to reach nearly $2 trillion, roughly 13 times today’s figure. In 2020, MIT found that the vast majority of organizations, 87% to be exact, believe AI will give them a competitive edge.

Those organizations may need AI just to keep up. Global AI adoption is set to expand at a compound annual growth rate of nearly 40%, with the World Economic Forum expecting nearly 100 million people to work in the AI space by 2025. Indeed, Forbes reports that 83% of companies view AI as a top strategic priority.  

But the growing ubiquity of AI has not been without controversy, with bias chief among the concerns. AI bias has already surfaced in sectors like healthcare, the judicial system, and marketing. And because many industries have turned to AI to help with hiring, bias has been found there as well.

This has led many to call for more “Responsible AI,” or “an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way,” according to Microsoft. But what exactly is Responsible AI, and how can you start building more ethical systems?

What Constitutes Responsible AI?

The potential for (and sometimes actual) bias, misuse, or abuse of AI by governments, businesses, and other organizations, whether done intentionally or not, dictates a clear need for Responsible AI. However, what that looks like and how it is implemented is far less certain. That’s because there is an ongoing debate among governments and industry leaders (as well as ethicists and philosophers) on what Responsible AI entails, and what standards and best practices are needed for its realization.  

The primary problem seems to lie in the subjectivity of the term “ethical,” according to Noelle Silver Russell, Partner of AI and Analytics at IBM. “It is less about what you perceive as right or wrong, and more about how you’re going to be held accountable for the outcomes of the things you build.”  

Dr. Melvin Greer, Chief Data Scientist at Intel, echoes that emphasis on accountability, but adds that what matters most is whether the system actually does what it claims. According to Dr. Greer, Responsible AI should “focus on successfully managing the risks of AI bias, so that we create not only a system that is doing something that is claimed but doing something in the context of a broader perspective that recognizes societal norms and morals.”

Despite the ambiguity, some common ground on what constitutes Responsible AI has emerged. A study published in Nature Machine Intelligence identified the following pillars of Responsible AI.

 


 

The Principles of Responsible AI 

 

Transparency and Responsibility 

“The more we are going to apply AI in business and society, the more it will impact people in their daily lives…this calls for high levels of transparency and responsibility.” – Stefan van Duin, Partner of Analytics and Cognitive at Deloitte

One of the purposes of AI is to make decisions that used to be made by humans. But unlike a human, an AI bases its decisions on programmed algorithms and training data rather than on human judgment. Transparency means that people (stakeholders, organization leaders, and most importantly end users) should be able to understand why an AI system makes the decisions it does.

Likewise, shifting decision-making to AI does not mean responsibility for those decisions shifts with it. There should always be a human element: people must maintain responsibility for, and meaningful control over, AI systems.
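To make this concrete, here is a minimal sketch of one common transparency technique, permutation importance, which ranks the input features a trained model relies on most. The article does not prescribe a specific tool; this example assumes scikit-learn and its built-in breast cancer dataset purely for illustration.

# Illustrative sketch only (assumes scikit-learn and its sample dataset);
# permutation importance shows which inputs a trained model leans on most,
# one practical way to make a model's decisions more explainable.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops;
# large drops mark the features driving the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

top = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, importance in top[:5]:
    print(f"{name}: {importance:.3f}")

Rankings like these are not a full explanation, but they give stakeholders and end users a starting point for asking why a system decided what it did.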

 

Justice and Fairness 

We all have inherent biases, whether we realize them or not, that affect how we think and the decisions we make. One of the promises of AI is the elimination of bias in decision-making. So far, however, that hasn’t been the case. AI learns from real-world data, and that data most likely has bias already built in.

So how do you make a more just AI system? Vigilance. It may be impossible to eliminate all bias from machine learning, but we can recognize bias and correct it as soon as possible. There are tools available, such as IBM’s AI Fairness 360, Google’s What-If Tool, and Microsoft’s Fairlearn, that are specifically designed to help make AI systems fairer.
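As a rough illustration of the kind of check these tools support, the sketch below uses Fairlearn (one of the libraries named above) to compare accuracy and selection rates across a hypothetical sensitive attribute; the labels, predictions, and group values are made up for the example and are not a full fairness audit.

from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

# Made-up labels, predictions, and a hypothetical sensitive attribute ("group").
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Accuracy broken down by group: large gaps are a red flag worth investigating.
frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)

# Difference in selection rates between groups (0 would indicate parity).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))

Numbers like these don’t prove unfairness on their own, but they show where to look and what may need correcting.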

 

Non-Maleficence 

Even before Turing asked whether machines can think, science fiction author Isaac Asimov had already assumed they would and speculated about a world in which they existed. In his 1942 short story Runaround, he laid out three laws for intelligent machines, the first being that they must not harm humans.

What he was describing was non-maleficence, which means “do no harm.”

It’s a principle that applies to nearly every area of ethics and human interaction. Those who build AI systems must not intentionally design them to harm others, and they have a continuing obligation to correct a system if harm is discovered.

 

Privacy 

The final principle of Responsible AI is one that every company should be mindful of, whether or not it employs AI. Privacy is a paramount concern when building Responsible AI, because the technology already collects massive amounts of data. There must be clear guidelines for how that data is used and how it is protected from cybercrime. Privacy should be a top concern in healthcare, and in any other industry that collects and stores sensitive personal data.

 

Conclusion 

While the above principles can help companies looking to use AI responsibly, keeping up with quickly changing technology can be frustrating. Companies serious about Responsible AI need to devote real resources to the technology now and expect to make significant investments in the future as well.

Responsible AI demonstrates even more clearly why building diverse and inclusive teams of employees is so important. Employees of diverse cultures, races, genders, and backgrounds will help ensure the AI you construct is more effective, ethical, and responsible. 

 
