The AI gold rush: Why risks and rewards remain a balancing act

2025/01/15

In the race to capitalize on the transformative potential of AI, enterprises may be taking major risks with their organizations’ futures by deploying AI solutions without fully considering the ethical, governance and security implications.

A recent study from global SaaS solutions provider Stibo Systems, “AI: The High-Stakes Gamble for Enterprises,” found that a full 49% of business leaders admit they are not prepared to use AI responsibly, 79% of organizations do not have bias mitigation policies and practices in place, and 54% of organizations have not implemented new security measures to keep pace with AI integration. Yet only 32% of business leaders admit they’ve rushed AI adoption.

Gaps in literacy, ethical usage and organizational preparedness are critical concerns, says Gustavo Amorim, CMO at Stibo Systems, but it’s a balancing act.

“Companies need to adopt AI to stay competitive — to realize the major benefits like efficiency, higher productivity, lower costs and greater innovation,” Amorim says. “But in the need to move forward, they’re often leaving business and organizational readiness behind.”

Part of the wave of adoption comes from a shift in how business leaders view AI. Today it’s overwhelmingly considered an enabler: nearly 90% of business leaders surveyed said they are eager to use the technology as a partner in critical decision-making.

“It’s not that leaders don’t see the risks or don’t realize and acknowledge that there are some risks involved,” he explains. “It’s that we’ve seen the short-term business benefits at this point, and long-term risks and implications have not come home to roost for many organizations yet. But the price tag is steep, including reputational damage, regulatory penalties and the erosion of trust among customers and stakeholders.”

Where change management fails technology adoption

The three pillars of business readiness and change management are technology, people and process. From an AI perspective, it has become far easier, and far faster, to implement an AI tool, flip the switch, and only then consider the potential consequences. Digging into the data to ensure it’s fair, free from bias and fully secure throughout the AI pipeline takes a great deal of change management effort and time. Unfortunately, that data, and how it’s used, is also foundational to the results an AI initiative actually produces.

“Companies are not necessarily taking the steps and the time to ensure that those things are done in parallel with adoption,” Amorim says. “Changing an internal process, or training the people who are going to be using the technology, giving them the skills required to eliminate bias and write fair treatment and data privacy into the DNA of a strategy, is a major hurdle.”

Governance, too, is a challenge, and one that’s often overlooked. Without standards and processes in place for managing both the inputs and the outputs, governance never runs in parallel with other business processes or becomes a regular part of how business is done; instead, outputs become a problem to be managed after the fact.

All these issues affect how customer data is used in AI, for instance, sharply increasing the potential for problems like non-compliant data use or data breaches. However, the Stibo Systems study shows that these aren’t currently major concerns for most business leaders, and they haven’t taken those preliminary steps. AI adoption is simply outpacing the development of ethical guidelines, 61% of leaders report, while 49% say they’re not prepared to use AI responsibly, even though 65% feel confident in their AI literacy skills. Unfortunately, that confidence is not reflected in their organizations’ AI policies and procedures: a full 69% of organizations have not implemented any data governance training as part of their AI strategy.

AI literacy: the foundation of an ethical framework

Data is the foundation of AI, but humans remain the single most important element of an AI strategy right from the jump. Models are created by humans, and humans choose and prepare the data required to train those models, the data that leads to conclusions and outcomes. But AI literacy also means understanding the business implications of the technology: what kinds of business processes can be run, how they can be run fairly and accurately, and whether they comply with a company’s internal policies.

And because it’s a technology that evolves at breakneck speed, one that self-learns, adapts and becomes more intelligent based on the data inputs it receives, you’re never done. Part of AI literacy is continuously analyzing the outputs against criteria that will evolve just as rapidly.

AI literacy and organizational preparedness, like most technology initiatives, start at the top. It’s not just about sponsoring AI initiatives; the top executive level must actively engage with the subject, offer education on how AI is incorporated into the day-to-day of an organization’s business processes, and set an example around the importance of responsible AI.

“This is usually not a conversation that most senior executives will engage with,” Amorim says. “Imagine a CMO, a CFO or a CEO talking about data bias and how that might become a corporate risk for the organization. It’s not a common agenda, which is why it’s ideal to start there.”

From there, it’s a matter of turning that into action, by establishing cross-functional teams that can develop policies, standards and guidelines.

“Most every company has guidelines and standards around using social media in the workplace, but not every company has guidelines on how you should use AI — and this is essential,” Amorim says. 

(Copyright: VentureBeat)