I’ve long been saying that AI has transformative potential and is here to stay. And of course I’m not alone.
Of 600 business owners polled by Forbes Advisor, 97% believe ChatGPT will help their business, and in an ISACA poll of thousands of digital professionals in March, 70% said their staff was already using AI. This fits with McKinsey’s finding that, as of May 2024, 72% of companies had begun adopting AI.
Some organizations are well prepared for this integration, like German telecommunications company Deutsche Telekom, which told the Harvard Business Review (also in May):
“We anticipated that AI regulations were on the horizon and encouraged our development teams to integrate the principles into their operations upfront to avoid disruptive adjustments later on. Responsible AI has now become part of our operations.”
In 2021, the company distributed AI Engineering and Usage guidelines covering the use of AI in the development process. This policy included specific actions to be taken before, during, and after launch for all projects involving AI.
Admirable, but how common is this?
According to ISACA’s 2024 polling, 85% of organizations have no AI policies in place at all, and 40% offer no AI training.
Here we see a truly significant, and potentially dangerous, gap: seven in ten companies are using a revolutionary new tool at work, while fewer than a fifth have implemented AI policies and guidance.
Current Challenges in Effective AI Implementation
AI has tremendous potential to transform the workplace, but that does not mean it comes without concerns for businesses.
From demonstrations of clear bias to hallucination, privacy risk, and security exposure, not to mention the need to be ready for ongoing regulation, AI implementation must be done wisely.
Implementing AI tools without sufficient guidance, let alone requirements and safeguards, opens up clear risks in each of these areas.
Then there is the skills gap. In the ISACA results, 85% of those surveyed believe workers will need to increase their AI-related skills and knowledge within the next two years to advance in, or even retain, their jobs.
While initial fears of AI causing massive layoffs in the near term may have proved excessive, AI, like all technological breakthroughs, will make some positions obsolete. Consider how many human spellcheckers and typesetters, for example, we employ today.
[For more on the topic of AI as job-killer or work-booster, see this edition of The PTP Report.]
Already there are far more openings for AI-trained professionals than there are workers to fill them, so it is up to workers to be proactive in upskilling and reskilling, just as it is in organizations’ interest to help them do it.
Data is another area of potential challenge. AI is only as good as its data, which means companies need their data to be as accessible, uniform, and well organized as possible. This has always been a challenge, with varying systems across varying departments, and while AI can potentially help resolve these issues, doing so requires preparation, oversight, and clarity of purpose. The alternatives can be disastrous.
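To make the uniformity point concrete, here is a minimal sketch of the kind of cleanup involved, assuming two hypothetical department exports that describe the same customers with different column names and date formats (all names and data here are invented for illustration):

```python
import pandas as pd

# Hypothetical exports: two departments record the same customers
# with inconsistent column names and date formats.
sales = pd.DataFrame({
    "Customer Name": ["Acme Corp", "Globex"],
    "signup": ["03/15/2023", "11/02/2022"],
})
support = pd.DataFrame({
    "customer_name": ["ACME CORP.", "GLOBEX"],
    "signup_date": ["2023-03-15", "2022-11-02"],
})

def normalize(df: pd.DataFrame, name_col: str, date_col: str) -> pd.DataFrame:
    """Map one department's schema onto a shared layout."""
    out = pd.DataFrame()
    out["customer_name"] = (
        df[name_col].str.upper()
        .str.replace(r"[^\w\s]", "", regex=True)
        .str.strip()
    )
    out["signup_date"] = pd.to_datetime(df[date_col])
    return out

# One uniform table, ready for downstream AI use.
unified = pd.concat(
    [normalize(sales, "Customer Name", "signup"),
     normalize(support, "customer_name", "signup_date")],
    ignore_index=True,
)
print(unified)
```

The schema mapping is usually the easy part; deciding who owns the unified data, and who governs access to it, is where oversight matters most.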
AI’s demonstrated biases have been regularly in the news (just ask Google), and without proper oversight, workers may put both personal and workplace privacy at risk with unguarded prompting of public models.
The risks are real, and the fallout can both damage a company’s reputation and expose it to litigation.
Best Practices to Future-Proof Your Organization
Whether generative AI is already in use at your organization with no communicated policy, you are currently wrestling with AI implementation best practices, or you are just getting started (or have yet to begin at all), here are some AI implementation strategies I recommend for closing this gap:
Don’t Wait to Create Practical but Comprehensive Strategic Guidance
First and foremost, companies need a clear, actionable AI implementation roadmap.
This involves setting realistic goals, understanding the business problems AI will (and won’t) be able to solve, and aligning it with your culture and objectives.
Deutsche Telekom made this very effort, only to find its initial policies too abstract for workers to put to practical use.
This guidance can (and likely must) come in stages, but it should be practical, addressing data privacy and protection from the outset.
Before your workers begin using generative AI prompts, for example, they must understand what may and may not be disclosed and, if using public AI tools, be aware of the settings that can be configured for protection.
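As one concrete (and deliberately simplified) illustration of such a safeguard, here is a hypothetical sketch of a redaction wrapper that strips obvious personal identifiers from a prompt before it ever reaches a public model; the patterns below are invented for illustration and are nowhere near a complete policy:

```python
import re

# Illustrative patterns only; a real policy would cover far more
# (names, account numbers, internal project codes, and so on).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# Usage: run every prompt through redact() before any call to a public model.
raw = "Draft a reply to jane.doe@example.com about SSN 123-45-6789."
print(redact(raw))
# Draft a reply to [EMAIL REDACTED] about SSN [SSN REDACTED].
```

Many public tools also offer their own controls, such as opting out of having conversations used for training; a good policy names which settings must be enabled before use.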
Some, in the rush not to be left behind, will leap to use these tools before understanding the ramifications of their actions, and it is the responsibility of leadership to make clear from the outset what is, and is not, potentially dangerous.
Start Small and Iterate
Once you have your first guidance in place, a great way to get your hands dirty with AI can be through internal incentives, such as a “no-code hackathon,” as recommended by Wharton School professor and AI scholar Ethan Mollick.
In his own classes, he put students to work designing AI solutions that could simplify their coursework, and the practice has likewise proven successful in the workplace. Such a contest rewards innovation, shows leadership’s commitment to AI, encourages hands-on use, and fosters both teamwork and group learning.
Best of all, it can be highly affordable and easy to run, and in the end it provides a great platform for initial training, practical application of your guidance, and discussion of best practices.
Maintain the Right Teams with Open Communication
Organizations committed to AI will most certainly need to upgrade their staffing in at least two ways: by recruiting skilled professionals and by upskilling current employees.
The dangers around privacy, hallucination, and bias, not to mention oncoming regulation, are such that compliance teams should be involved from the start.
I recommend building cross-functional teams where possible, as soon as possible, combining technical skill with domain expertise, and keeping ethics teams in close contact with digital teams. This also increases the diversity of thought involved, helping you be ready for the rapid changes that are sure to continue.
And given the complexity of the task at hand, the more open the communications, the better.
Continuous Learning and Ongoing Support
One method Deutsche Telekom used was a central email account where questions concerning AI implementation and ethics could be submitted. Its compliance team also consulted on projects, issued ethical certifications, and even conducted random audits of 10% of AI projects each year.
These actions were not about enforcement so much as support, ensuring that the guidance was being put into use on an ongoing basis.
As rapidly as the technology is changing, this process of adoption will be continuous for the near term, and users must embrace that mindset. This means ongoing education as well as continued flexibility. The experts of today are already behind if they are not also learning what’s coming in the pipeline.
Collaborating with AI Experts and External Partners
Continuing the point above: where do you turn for expertise? It’s essential to identify these sources early and often, including industry groups and academia, and to employ external expertise as well.
[Contact PTP for our help in leveraging AI solutions in your business.]
Given the resource-intensive nature of AI development, partnerships will be an invaluable part of everyone’s AI experience.
Invest in Scalable Solutions for Continued Relevance
If you are like me, you have tools in your garage that you’ve used for decades. And when you have a need, they still get the job done today.
This model, sadly, rarely applies to technology. While on-prem solutions still make sense for many businesses in certain situations, flexibility and scalability are going to be absolutely key with AI over the coming decade.
I recommend implementing solutions now that can grow with you, such as cloud-based AI platforms and modular AI tools, which will give you greater flexibility. Ironically, such solutions can demand even more attention to internal governance, resource optimization, and oversight; I advise staying on top of these challenges with proactive principles like FinOps.
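To sketch what “modular” can look like in code (all class and function names here are hypothetical, not any particular vendor’s API): application logic is written against a thin interface, so the backing model, cloud-hosted today, perhaps on-prem tomorrow, can be swapped without rewriting the application.

```python
from typing import Protocol

class TextModel(Protocol):
    """Any backend that can complete a prompt qualifies."""
    def complete(self, prompt: str) -> str: ...

class CloudModel:
    """Stand-in for a hosted API client (endpoint and auth omitted)."""
    def complete(self, prompt: str) -> str:
        return f"[cloud response to: {prompt!r}]"

class LocalModel:
    """Stand-in for an on-prem or open-weights deployment."""
    def complete(self, prompt: str) -> str:
        return f"[local response to: {prompt!r}]"

def summarize(model: TextModel, text: str) -> str:
    # Application code depends only on the interface, not the vendor,
    # so switching providers becomes a configuration change.
    return model.complete(f"Summarize in one sentence: {text}")

print(summarize(CloudModel(), "Q2 sales rose 4% on services growth."))
print(summarize(LocalModel(), "Q2 sales rose 4% on services growth."))
```

A single choke point for model calls is also where FinOps practices hook in naturally, since usage metering and cost attribution become straightforward to add.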
Conclusion
AI is here, and implementations are surging. But a bad implementation may do worse than fail to improve your bottom line; it may expose you to significant risk.
By developing and implementing clear and practical guidance, starting small, building your teams from within and without, embracing continuous learning and expertise, and staying adaptable, you can begin to close this gap today.
I believe that generative AI already offers tremendous value for organizations across disciplines, and when implemented wisely, it can be truly transformative. More of us should follow Deutsche Telekom’s lead today, before it’s too late.
References
How to Implement AI — Responsibly, Harvard Business Review
How Businesses Are Using Artificial Intelligence In 2024, Forbes Advisor
The state of AI in early 2024: Gen AI adoption spikes and starts to generate value, McKinsey