Enabling an AI-ready culture
Artificial intelligence holds the potential to deliver enormous benefits to society and a management system standard might be the answer everyone is looking for.
Artificial intelligence (AI) promises to make organizations 40 % more efficient by 2035, unlocking an estimated USD 14 trillion in new economic value to global GDP by 2030, according to PwC. This makes it the biggest commercial opportunity in today’s fast-changing business climate, all while improving billions of lives.
To unlock the full potential of AI, however, leaders must think differently. “We cannot ignore that we have to apply global standards to get the maximum benefit based on responsible use of AI technology,” says Microsoft’s Jason Matusow. These standards, he adds, would need to address business-to-consumer, as well as business-to-business (B2B), scenarios to be of value.
As General Manager of Microsoft’s Corporate Standards Group, Matusow believes the production and adoption of AI International Standards will enable efficient, effective and trusted solutions that strengthen consumer, B2B and regulator confidence. “AI will augment human capability,” he explains, “opening the door to enormous new opportunities for every industry. It will empower individuals to achieve more in their daily lives.”
Digital transformation has taken root on a global scale. And things are only going to get more digitized as the world embraces the ability to turn data into value. A new report from the World Economic Forum (WEF) states that, by 2022, 60 % of global GDP will be digital. In three short years, it observes, there will be “very little distinction between the digital economy and the economy, or between digital society and society”.
AI is running in the background of our daily lives continuously. Everything from creating new business models for jet engines and financial services to improving traffic flow in smart cities is leveraging the opportunities of this digital transformation. Its benefit to society and individuals is so vast that it cannot be reduced to figures alone. Nonetheless, as the transformative potential of AI becomes clear, so, too, do the risks posed by unsafe or unethical uses of such technologies.
Cybersecurity, privacy and data governance are all part of the responsible AI story. This was highlighted in the Davos Agenda organized by the WEF to foster responsible AI leadership. The platform sheds light on how the world is trying to tackle these issues while emphasizing that a lack of global consensus is holding back the accelerated adoption of the technology and the benefits it could bring.
The impact of AI will always be measured in human terms.
For many AI experts, creating a foundation of trust will expand opportunity for every sector. The key is to start with “responsible” AI standardization. At the heart of this work is subcommittee SC 42, Artificial intelligence, whose ideal outcome is to create an ethical AI-enabled society. Working under ISO/IEC JTC 1, the information technology arm of ISO and the International Electrotechnical Commission (IEC), the expert group on AI is making headway on a ground-breaking standard that, if accepted, will offer the world a new blueprint to facilitate an AI-ready culture. This management system approach will establish specific controls, audit schemes and guidance that are consistent with emerging laws, regulations and stakeholder needs.
However, a lot still needs to be done. According to New York University’s AI Now Institute, based on the current AI adoption rates, only North America, Europe and China will capture roughly 80 % of the economic benefits brought by AI, leaving just 20 % for the remaining two-thirds of the global population. If this trend continues, there will be a huge missed opportunity to significantly enhance the lives of billions of people and improve the state of the world.
The time is now
There has never been a more relevant time for International Standards development in the field of AI. Traditionally, AI focused on large-scale problems that were too hard or too complex to solve with conventional methods. This is no longer the case. As the need for AI-based systems has grown exponentially over the years, the cutting-edge technology is finding more mundane applications. The barriers to its broad adoption, combined with a strong demand for global consensus, have now made the ecosystem ripe for standardization.
Wael William Diab, Chair of SC 42, believes standards can enable an AI-ready culture and fuel the digital transformation. “While there is no single silver bullet to unlocking the potential of AI and enabling the promise of digital transformation, the importance of standardization cannot be overstated,” he says. The holistic approach will look at the entire AI ecosystem, as Diab explains. “Having a management system standard is an important part of that strategy, which is ultimately aimed towards continual improvement of a system.”
ISO and IEC strive to dynamically react to emerging industry needs. Together, the two organizations are leveraging an ecosystem approach to accelerate AI adoption whilst simultaneously addressing fairness, accountability and ethical concerns.
AI is running in the background of our daily lives continuously.
A management system approach
Collaboration is central to making sure standards reflect how organizations are using AI and balancing the risks with commercial reality. “ISO and IEC offer multilateral collaborations which can help us maximize the benefits of AI. By removing barriers to technology adoption, standards simultaneously and proactively ensure that societal concerns are addressed,” says Diab. “The diversity of stakeholders in SC 42 can ensure better standards and, ultimately, broader adoption.”
A fit-for-purpose approach, then, is the answer. In fact, as Diab suggests, “it is key to solving one of the most pressing governance issues of the 21st century”. Standards can play a constructive role in fostering the widespread use of responsible AI. For example, a management system standard (MSS) can establish common building blocks and risk management frameworks for companies, governments and other organizations.
With an MSS approach, the implementation of AI technologies will:
- Enable organizations to dynamically map their work to the regulatory and societal requirements captured through the MSS
- Be a trust mechanism that will facilitate B2B contracting
- Establish a baseline that can be verified through audit and/or conformity assessment
As Diab explains, the SC 42 ecosystem approach, of which the MSS is a key part, ensures that stakeholders from many different backgrounds can establish a framework, one that, according to the world-renowned AI expert, enables organizations to speak the language necessary to implement AI and reap its full potential. “Novel standards like the management system standard go a step further to addressing issues of confidence and pulling all the work together.”
With AI’s impact on industry and society accelerating every day, and all the uncertainties around how it is being managed, it is imperative that we ensure the technology is used ethically for the sake of global public interest. Microsoft’s Jason Matusow agrees: “As a platform provider, success for us is when the economic benefit to our customers collectively dwarfs the economic benefit to our business. The work of SC 42 will be an important enabler for marketplace expansion in which every organization can participate and benefit.” In fact, all organizations will reap rewards if the AI standards follow the same consistent, risk-based approach already in practice for cybersecurity and privacy.
The impact of AI will always be measured in human terms, in the enhancement of people’s lives, and ISO and IEC will continue to create a set of standards that support the full spectrum of global interests. As the technology works its way into almost every aspect of our lives, AI will need to be protected against negative uses, both deliberate and unintended, for the sake of individual rights, human safety and societal welfare.
The opportunity, and challenge, is to use the standards process effectively to promote, develop and realize the promise of responsible AI, delivering business growth, improving services and protecting consumers. The work of SC 42 is an important thread in the global tapestry needed to build a safer, interconnected future that we can all look forward to.