The news about AI growth appears to be getting more alarming every day.
It started with the open letter in March from tech leaders and AI experts, which warned of impending catastrophe unless large-language-model and generative AI development was put on pause. This was followed a few weeks later by OpenAI CEO Sam Altman's testimony before Congress, where he warned that "If this technology goes wrong, it can go quite wrong."
Last week, a group of industry leaders from OpenAI, Google's DeepMind, Anthropic, and other AI research groups warned that the AI they are working on could one day pose an extinction-level threat, on par with a pandemic or nuclear war.
Clearly, a lot of people are worried about AI technology growing beyond human control.
I think a lot of this is premature conjecture. The intent behind the hysteria appears to be to impress upon lawmakers the importance of oversight for AI development, and the media is running with it.
“AI will destroy the world” makes for great headlines and highly clickable content.
The good news is that lawmakers across the globe have been motivated to understand how this technology is evolving and how it could impact our lives, and they've started to build regulations around these concerns.
As a business leader and a tech leader invested in AI growth, I wanted to take a closer look at the fallout from some of these potential regulations. Since AI regulation in the US is still nascent, I'm extrapolating in some instances from previous examples of tech regulation in action.
There are benefits to regulation
Much-needed clarity
These regulations can help set global and US-based standards for AI practices. For example, the OECD Principles on AI, adopted by over 40 countries, provide international standards for AI systems that promote transparency, security, and fairness.
Such policies enable companies to navigate AI implementation across multiple jurisdictions with greater ease, aiding in international collaboration and expansion.
A level playing field
AI regulations can act as unseen referees, ensuring that businesses compete under the same rules. They can also mitigate data breaches and reduce risk, a significant consideration given the volume of data that goes into building and training large AI models.
Take the European Union's General Data Protection Regulation (GDPR), for instance. It's a groundbreaking law that champions data protection, preventing companies from twisting AI for unfair gains. Instead of stifling competition, it has sparked a wave of creativity, forcing companies to innovate within ethical boundaries. The result? A more balanced and fair business playing field.
Push for ethical AI development
AI regulations can also instigate a cultural shift within organizations towards ethical AI practices.
Microsoft, for instance, has been at the forefront of responsible AI usage, establishing an AI ethics committee to review its AI applications in light of the emerging regulations.
Such measures have a ripple effect on organizational processes, encouraging businesses to adopt ethical, transparent, and responsible AI usage.
But leaders need to prepare for…
Increased cost of deployment
Regulations can increase the cost of developing and deploying AI systems and can limit the ways in which companies can use AI. However, regulations can also provide clarity and certainty for businesses, which can be beneficial in the long run.
Business leaders should be aware of the potential impact of AI regulation on their operations. We should stay informed about proposed regulations and engage with policymakers to ensure our voices are heard. We also need to start investing in ethical and responsible AI so that our systems comply with existing and future regulations.
An impediment to innovation?
AI thrives on the freedom to experiment, test, and iterate. Over-regulation could stifle creativity and hamper technological advancement. Just look at how the development of 5G mobile networks has been repeatedly bogged down by regulatory hurdles, slowing both rollout and deployment.
Poorly designed and hastily enacted regulations can impede not just the development of technology but also equitable access to it. A good recent example is the 2017 decision to repeal net neutrality rules in the U.S., which allowed ISPs to potentially throttle internet speeds or charge more for certain types of content. That, in turn, could limit innovation in the digital space, since small startups may not be able to compete with larger companies that can afford to pay for faster speeds.
The issue is even more acute for multinational corporations, which have to comply with a multitude of regulatory frameworks across different jurisdictions, leading to complex compliance landscapes.
A need for re-education
In 2022, a PwC report showed that 85% of businesses were reevaluating their AI systems for regulatory compliance, prompting new procedures and employee education on compliance implications.
As the landscape of AI evolves, keeping abreast of new regulations becomes an indispensable task for organizations. Not only do these rules shape the external business environment, but they also influence internal operations, impacting how employees handle AI technologies.
It’s crucial for organizations to invest in continuous education programs to help their staff navigate the intricate web of AI regulations. This fosters a culture of regulatory compliance and strengthens the organization’s resilience against potential legal and reputational risks.
Moreover, an informed workforce can leverage AI more effectively, striking a balance between harnessing AI’s potential and respecting its ethical and legal boundaries.
Conclusion
The introduction of AI regulations in the business landscape presents a mixed bag of challenges and opportunities. While added costs and potential innovation constraints may seem daunting, these regulations also offer a chance to review and refine AI strategies, promote ethical AI practices, and bolster consumer trust.
By equipping themselves with the necessary resources and agility, businesses can navigate the complex terrain of AI regulation and harness AI's immense potential responsibly and in compliance with the law.
Nick Shah is the Founder and President of Peterson Technology Partners (PTP), Chicago's premier IT staff augmentation agency. With his relationship-focused mentality and technical expertise, Nick has earned the trust of Chicago-based Fortune 100 companies for their technical staffing needs.