Through the Looking Glass: Democratizing AI With Emerging Technologies


Are we finally seeing the democratization of artificial intelligence (AI)? New research out of OpenAI and the University of Pennsylvania suggests that generative AI, thanks to open and widely available tools such as ChatGPT and GPT-4 (Generative Pre-trained Transformer models), will touch the jobs of at least 80% of occupational groups. “The influence spans all wage levels, with higher-income jobs potentially facing greater exposure”—particularly jobs requiring college degrees. Still, the researchers add, “considering each job as a bundle of tasks, it would be rare to find any occupation for which AI tools could do nearly all of the work.”

AI will be everywhere—and it's important to point out that democratized AI is already in common use. “Every day, we leverage the immense power of artificial intelligence when we use auto-complete on our phones to message someone or when we click on a recommended show on our favorite streaming platform,” Aravind Chandramouli, head of the AI center of excellence at Tredence, said. AI “has been at the consumer's fingertips for over a decade, with the early trailblazers, such as Apple's Siri and Amazon's Alexa, easily accessible with any smart device,” agreed Muddu Sudhakar, CEO of Aisera. “ChatGPT has accelerated this. Now, any consumer can access AI in a consumable manner, packaged in easy-to-understand human-like conversations.”

Generative AI technologies take this widespread use of AI to a new level, enabling end-users across the board to design their own approaches. Large language models (LLMs) such as ChatGPT for text and code, and DALL-E 2 for creating novel visual art, are “truly bringing AI to the masses, with a recent explosion in usage of these tools for all sorts of use cases,” said Jared Endicott, director of the data and AI Studio and principal data scientist at Launch Consulting, a division of The Planet Group. 

This has implications for all forms of user interactions with these systems as well—such as natural language processing (NLP), said Udo Sglavo, vice president of advanced analytics for SAS. “Text analytics—powered by natural language processing—helps turn text data into useful information for better searching of data and, yes, for helpful chatbots. NLP supports generative AI, as does deep learning and reinforcement learning (RL). What’s exciting is that very smart people are experimenting and testing NLP, RL, and other technologies to make them better. So we can achieve analytics for everyone, everywhere. As organizations continue to embrace AI, machine learning, computer vision, and IoT analytics to gain valuable insights, people of all skill levels will be empowered to participate in the analytics process through low- or no-code options.”
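The text-analytics step Sglavo describes—turning raw text into information that can be searched—can be caricatured in a few lines. The sketch below (document names and contents are hypothetical, and a real NLP pipeline would use trained models rather than word counts) builds a term-frequency index over a handful of documents and ranks them against a query:

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def build_index(docs: dict[str, str]) -> dict[str, Counter]:
    """Map each document id to its term-frequency counts."""
    return {doc_id: Counter(tokenize(body)) for doc_id, body in docs.items()}

def search(index: dict[str, Counter], query: str) -> list[str]:
    """Rank documents by how often the query terms appear in them."""
    terms = tokenize(query)
    scores = {doc_id: sum(counts[t] for t in terms)
              for doc_id, counts in index.items()}
    return [d for d, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

docs = {
    "note1": "The chatbot answered the billing question correctly.",
    "note2": "Streaming recommendations improved after the model update.",
}
index = build_index(docs)
print(search(index, "chatbot billing"))  # ranks note1 first
```

Swapping the word counts for model-generated embeddings is what separates this toy from the production text analytics Sglavo has in mind, but the shape of the pipeline—tokenize, index, rank—is the same.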

A generative AI platform such as ChatGPT “has made AI tangible, revolutionizing the perception of AI for the consumer,” said Sudhakar. “Before ChatGPT, many executives did not know generative AI existed, much less how it could improve their business operations. Now, enterprises are recognizing the potential and want to leverage its benefits. While this technology is still being adopted across the enterprise, this is a crucible moment in AI.”

The recent advancement in genAI “is a lightbulb moment,” said Artem Kroupenev, vice president of strategy at Augury. “Meaning that just as the lightbulb was a killer application that drove the mass adoption of electricity, we’re now seeing one of the first use cases that will drive mass adoption of AI technology. Like with other foundational technologies, we will need to work through many questions and concerns, but overall, AI is an augmentation of our minds, and it will make us smarter.” 

ETHICS AND DEMOCRATIZED AI

A concern with the widespread proliferation of AI is the set of ethics issues it brings, including the potential for bias in its output. “Enterprises should put into place guidelines and best practices for using AI tools within the workplace,” said Endicott. “While they are incredibly promising, their efficacy is still not 100%, so there needs to be some oversight in how they are used. For example, an employee could use ChatGPT to help write a blog article, but the employee should also vet and validate the output and make appropriate edits before this article is published. Organizations should also be transparent and make sure that they make appropriate attributions when AI has assisted with the creation of materials. Enterprises that seek to systematize the output of generative AI will want to have data scientists and engineers driving these types of initiatives.”

An important piece of the AI democratization movement is the adoption of ethics that ensure fairness and privacy. A generative AI platform such as ChatGPT “has given the industry a unique opportunity to discuss ethics in the democratizing AI conversation,” CF Su, vice president of machine learning at Hyperscience, pointed out. “To effectively democratize AI, enterprise organizations must prioritize ethics, establishing internal ethics committees and organization-wide frameworks to ensure technology is developed and evaluated fairly.”

Generative AI may inherently incorporate greater ethics than previous forms of AI, Chandramouli related. “Chatbots have been around forever. The challenge has been the gap between the hype and the reality of how chatbots used to work. Despite being hyped as conversational agents, previous-generation, non-deep-learning chatbots could only answer a narrow range of questions. While later deep learning models performed better in answering open-domain questions, they were susceptible to users inducing them to give inappropriate responses, which limited their adaptability. The ChatGPT chatbot has proved to be an excellent tool for answering complex questions simply and effectively. It also has guardrails for ensuring that the responses are appropriate, and it is much harder to make ChatGPT give inappropriate responses.”

Sudhakar has a somewhat different take, acknowledging that while the potential of AI "is invigorating, both for the consumer and the enterprise, … there are limited regulations around AI ethics."

The White House’s AI Bill of Rights “is a great place to start for those beginning their ethics journey,” Su advised. “Organizations can use this framework to understand and evaluate whether their use of AI abides by these guidelines—if they don’t, the committee can serve as a guiding force to get the technology on track. Once the technology is deemed fair and ethical, organizations can consider democratizing it. If you deploy technology that is easy to use and accessible to many, without considering the ethical implications or back-end development, we could have unintended consequences that will be difficult to correct, like the loss of trust.”

THE DATA FACTOR

For AI to thrive, it requires a great deal of data from many sources. The good news, as many data managers know, is there is no shortage of that. The challenge has been incorporating and making sense of the growing volumes of unstructured data moving through enterprises. “Enterprises generate tons of structured and unstructured data, and making effective use of it today is beyond human capacity,” said Kroupenev. “Specialized AI solutions are already widely used for accurate insights from sensors and traditional systems of record, management, and execution. But the benefit of genAI is that it can help make sense of the boatloads of unstructured conversational data that’s generated across the enterprise. Recorded customer interactions, ideas, meeting notes, calls, emails, documents, contracts, Slack chats, presentations, etc. are a treasure trove of data that will drive a 100x increase in the level of insight and knowledge AI offers to enterprises.”

“With democratized AI, everyone, not just data scientists, can use AI,” said Chandramouli. “It solves the shortage of skilled data scientists and enables many associates to solve AI-related problems. As a result, the company can implement more AI solutions faster. Enterprises, however, must focus on data quality since poor data quality leads to unreliable models and results.”

Sudhakar urged the use of pretrained language models (PLMs) that will fit into enterprises. “These ready-to-go, easily deployed models perform AI and ML tasks specific to the needs of the business without the exorbitant costs of hiring in-house talent to create a product from scratch,” said Sudhakar. “Specifically, PLMs in the form of LLMs are being deployed in the enterprise to engage in inquiries and automate workflows. LLMs allow employees and customers to engage in human-like conversations through conversational AI: chatbots and virtual assistants. LLMs apply context to user queries, from industry verticals spanning healthcare to manufacturing to retail, adding the proper scope and domain insights to the nature of the question. This allows enterprises to enable self-service on user inquiries with 95% accuracy and accurately transact specific workflows, leading to higher employee productivity and customer satisfaction.”
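The pattern Sudhakar describes—a pretrained model answering queries with domain context applied—can be sketched as a routing shell around the model call. Everything in the sketch below is hypothetical (the domain names, keywords, and the stubbed model call); in a real deployment, answer_with_plm would invoke a pretrained language model with the detected domain folded into the prompt:

```python
# Minimal routing shell for a domain-aware assistant.
# The PLM call is stubbed so the control flow is runnable on its own.

DOMAIN_KEYWORDS = {
    "healthcare": {"patient", "claim", "diagnosis"},
    "retail": {"order", "refund", "inventory"},
}

def detect_domain(query: str) -> str:
    """Pick the domain whose keywords overlap the query most, else 'general'."""
    words = set(query.lower().split())
    best = max(DOMAIN_KEYWORDS, key=lambda d: len(DOMAIN_KEYWORDS[d] & words))
    return best if DOMAIN_KEYWORDS[best] & words else "general"

def answer_with_plm(query: str) -> str:
    """Stub for the pretrained-model call; a real system would pass the
    domain as context in the prompt to the PLM."""
    domain = detect_domain(query)
    return f"[{domain}] routed: {query}"

print(answer_with_plm("Where is my order and refund status?"))
```

The point of the sketch is the division of labor: lightweight domain detection narrows the scope, and the pretrained model supplies the human-like conversation Sudhakar describes.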

As enterprises incorporate AI within their technology stack, “most of it will be integrated and take shape within applications, heightening search capabilities, enhancing conversational chatbots, automating business processes, and accelerating data analytics tools for decision-making,” said Su. “With further investments in underlying technologies, such as NLP and machine learning, we can anticipate that AI-based solutions will become more efficient and create new opportunities for AI-fueled innovations.” 

