More companies in every industry are adopting AI to transform business processes. But the success of their initiatives depends on having the right people on board, not just data and technology.
An effective enterprise AI team is a diverse group that encompasses far more than a handful of data scientists and engineers. Successful AI teams also include a range of people who understand the business and the problems it’s trying to solve, says Bradley Shimmin, chief analyst for AI platforms, analytics, and data management at consulting firm Omdia.
“The technologies and the tooling we have available is skewing more toward enabling and empowering domain professionals, the business users, or the analytics professionals to take direct ownership of AI within companies,” he says.
Carlos Anchia, MD of AI at Acacia Advisors, agrees that AI success rests largely on establishing a well-rounded team with a diverse range of advanced skills, but doing so is challenging.
“Identifying what makes a highly efficient AI team may seem like an easy thing to do, but when you examine the detailed responsibilities of individuals on successful AI teams, you quickly come to the conclusion that building these groups is extremely hard,” he says.
Now, with the emergence of gen AI, AI projects have expanded into nearly every area of corporate activity and nearly every corporate function, and promise to be transformative not just for technology companies but across all industries. This means AI now requires more key people, with a much broader range of skills and responsibilities than ever before.
“We see organizations adding variety and complexity to their AI projects, both for internal and external use,” says Meagan Gentry, AI practice lead at IT consultant Insight. “That calls for new roles, often positioned in centers of excellence or innovation teams. These roles include AI engagement managers, AI governance strategists, and LLM operations engineers, who are all critical to the success of rolling out safe, scalable, and high-ROI generative AI applications.”
And it goes beyond purely technical skills.
“Executives need clarity on the performance of their AI investments and a trusted framework for pivoting quickly when an investment or initiative isn’t making the impact expected,” she says. “At the same time, leaders also need to know how their teams are mitigating risks like security and privacy vulnerabilities, biases and trustworthiness of source data, and the robustness of architectures as the technical landscape changes.” Interdisciplinary roles support these activities and are critical to success, she adds.
To help you assemble your ideal AI team, here is a look at 10 key roles found in well-run enterprise AI teams today.
Data scientist
Data scientists are the core of any AI team. They process and analyze data, build machine learning (ML) models, and draw conclusions to improve models already in production. A data scientist is a mix of a product analyst and a business analyst with a pinch of ML knowledge, says Mark Eltsefon, staff data scientist at Meta.

“The main objective is to understand key metrics that have a major impact on business, gather data to analyze the possible bottlenecks, visualize different cohorts of users and metrics, and propose various solutions on how to increase these metrics, including making a prototype of the solution,” he says.

When working on a new feature for TikTok users, for example, it’s impossible to understand whether the feature benefits or alienates users without data science, he adds. “You don’t understand how long you should test your feature and what exactly you should measure,” he says. “For all of this, you have to apply AI methods.”
ML and LLM operations engineer
Data scientists may build AI and ML models, but it’s engineers who implement them. They focus on the operational aspects of ML and LLMs, says Insight’s Gentry. “They ensure that models are deployed, monitored, and maintained effectively,” she says. For gen AI, that might include integrating LLM functionality into existing enterprise systems, a very rapidly changing area of technology. The ML engineer’s job is a bit better defined, since it’s been around longer, but it’s also changing.
“In today’s age of generative AI, ML models aren’t necessarily built internally and hosted by enterprises,” says Dattaraj Rao, chief data scientist at Persistent Systems. “In many instances, the ML engineer role is transformed into compiling data from multiple sources, creating prompts that can make the LLMs generate juicy content, often invoked via an API call.”
In addition, he says, data architecture skills are very much in demand, with applications requiring the consumption of unstructured data in vector databases, and scaling them for billions of rows of text content.
“Finally, different approaches to interacting with LLMs show potential with patterns like reflection, chain of thought, and tool usage,” he says, “leading to agents and agentic workflows. But while agentic workflow tends to be autonomous, the ML engineer still needs to set up the connections, validate if the correct data is flowing, and set evaluation criteria to ensure LLM calls are responding correctly.”
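The validation step Rao describes, checking that LLM calls respond correctly before results flow downstream, can be sketched in a few lines. This is an illustrative sketch only: `call_llm` is a hypothetical stub standing in for a real model API call, and the JSON schema is invented for the example.

```python
import json

# Hypothetical stub standing in for a real LLM invocation; a production
# version would call a vendor SDK or HTTP endpoint instead.
def call_llm(prompt: str) -> str:
    return '{"sentiment": "positive", "confidence": 0.92}'

def evaluate_response(raw: str) -> bool:
    """Evaluation criteria: the model must return well-formed JSON with a
    known sentiment label and a confidence score between 0 and 1."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return (
        data.get("sentiment") in {"positive", "negative", "neutral"}
        and isinstance(data.get("confidence"), (int, float))
        and 0.0 <= data["confidence"] <= 1.0
    )

response = call_llm("Classify the sentiment of: 'Great product!'")
if not evaluate_response(response):
    # A real workflow might retry, fall back, or escalate to a human here.
    raise ValueError("LLM response failed validation")
```

In practice the engineer would layer on retries, logging, and richer schema checks, but the principle is the same: never let an unvalidated model response flow into downstream systems.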
AI prompt engineer
Prompt engineering didn’t even exist as a term two years ago. Today, everyone who works with technology is expected to have some understanding of prompt engineering, know how to ask questions, and be aware of the AI’s limitations. But when it comes to gen AI integrated into corporate systems, prompt engineering becomes a much more involved and technical task. A prompt needs to include all relevant context, such as the role the AI should take when giving an answer, the style, length, and format of its response, as well as all the relevant information, guardrails, and more. Prompt engineers might need to do significant experimentation to discover the best-performing prompt for each use case, and then continue to adapt the prompts as the AI models evolve.
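A production prompt of the kind described above often starts life as a parameterized template. The sketch below, with hypothetical field names and made-up content, shows one way to assemble role, style, length, format, and guardrails into a single prompt string:

```python
# Illustrative prompt template; the fields and wording are examples,
# not tied to any specific vendor's API.
PROMPT_TEMPLATE = """You are a {role}.
Respond in a {style} tone, in no more than {max_words} words, formatted as {fmt}.

Rules:
{guardrails}

Context:
{context}

Question: {question}"""

def build_prompt(role, style, max_words, fmt, guardrails, context, question):
    return PROMPT_TEMPLATE.format(
        role=role,
        style=style,
        max_words=max_words,
        fmt=fmt,
        guardrails="\n".join(f"- {g}" for g in guardrails),
        context=context,
        question=question,
    )

prompt = build_prompt(
    role="customer-support agent for an airline",
    style="polite, concise",
    max_words=150,
    fmt="a bulleted list",
    guardrails=["Do not quote fares not present in the context.",
                "Escalate refund requests to a human agent."],
    context="Baggage allowance: 1 carry-on, 23kg checked.",
    question="How much luggage can I bring?",
)
```

Keeping the template in one place like this makes it easier to experiment: the prompt engineer can vary one field at a time, measure response quality, and version the template as models evolve.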
Data engineer
Data engineers build and maintain the systems that make up an organization’s data infrastructure. With traditional ML, about 80% of the work is in handling and preparing data, says Matt Mead, CTO at information technology services company SPR.
Traditional ML takes a lot of data and needs experts who are good at math and statistics, he adds. “But generative AI tools, like the large language models most companies use, don’t need as much data, and they’re way quicker to learn — sometimes just in a few hours,” he says.
Companies deploying pre-built chatbots, integrating simple LLM queries with APIs, deploying copilots, or using the generative AI tools built into enterprise applications like Salesforce may need little to no data engineering. However, companies building custom solutions by fine-tuning models, using retrieval-augmented generation (RAG) to provide the AI with up-to-date information, or building models from scratch will still need the training data, and the ability to manage it.
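For teams going the RAG route, the core retrieval step can be illustrated with a toy example. The three-dimensional "embeddings" below are fabricated for the sketch; a real system would use a trained embedding model and a vector database, which is exactly where data engineering comes in.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy corpus: each document paired with a fabricated embedding vector.
documents = {
    "Refund policy: refunds within 30 days.": [0.9, 0.1, 0.0],
    "Shipping takes 3-5 business days.":      [0.1, 0.9, 0.1],
    "Warranty covers parts for one year.":    [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    # Return the k documents most similar to the query embedding.
    ranked = sorted(documents,
                    key=lambda d: cosine(query_embedding, documents[d]),
                    reverse=True)
    return ranked[:k]

# Stand-in for an embedded user question about refunds.
query_vec = [0.85, 0.15, 0.05]
context = retrieve(query_vec)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: Can I get a refund?"
```

Scaling this pattern from three toy documents to billions of rows of text is the data engineering challenge Rao describes: the pipelines that chunk, embed, index, and refresh that content are what keep the retrieved context trustworthy.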
Data engineers are crucial to AI initiatives because data needs to be both collected and made suitable for consumption before anything trustworthy can be done with it, says Erik Gfesser, owner of consulting firm Fesswise. This role hasn’t changed much over the past two years, he says. “And the criticality of this role continues to increase,” he adds. “Without data engineers, AI initiatives will simply grind to a halt.”
Domain expert
The domain expert has in-depth knowledge of a particular industry or subject area. This person is an authority in their domain, can judge the quality of available data, and can communicate with the intended business users of an AI project to make sure it has real-world value.
These subject matter experts are essential because the technical experts who develop AI systems rarely have expertise in the actual domain the system is being built to benefit, says Max Babych, CEO of software development company SpdLoad. “Domain experts can provide critical insights that will make an AI system perform its best.”
When Babych’s company developed a computer-vision system to identify moving objects for autopilots as an alternative to LIDAR, it started the project without a domain expert. Although research proved the system worked, what SpdLoad didn’t know was that car brands prefer LIDAR over computer vision because of its proven reliability, and there was no chance they would buy a computer vision–based product.
“Think about the business model, then attract a domain expert to find out if it’s a feasible way to make money in your industry — and only after that try to discuss more technical things,” he says.
Moreover, domain experts can be vital liaisons between customers and the AI team, says Ashish Tulsankar, CTO at edtech platform iSchoolConnect.
“This person can communicate with the customer, understand their needs, and provide the next set of continuous directions to the AI team,” he says. “And the domain expert can also keep track of whether the AI is implemented ethically.”
AI designer
An AI designer works with developers to ensure they understand the needs of human users. This role envisions how users will interact with AI and creates prototypes to demonstrate use cases for new AI capabilities. An AI designer also ensures that trust is built between human users and an AI system, and that the AI learns and improves from user feedback.
“One of the difficulties organizations have in scaling AI is that users don’t understand the solution, disagree with it, or can’t interact with it,” says Shervin Khodabandeh, senior partner and MD at consulting firm BCG’s AI business in North America. “Organizations that are getting value from AI, their secret is actually just that they get the human-AI interaction right.”
BCG thinks about it in terms of a 10-20-70 rule: 10% of the value comes from algorithms, 20% from the tech and data platforms, and 70% from business integration, that is, tying AI to the company's strategy within its business processes, he says.
“That human-AI interaction is absolutely key and is a huge part of that 70% challenge,” he says, adding that AI designers will help you get there.
Product manager
The product manager identifies customer needs, and leads the development and marketing of a product while making sure the AI team is making beneficial strategic decisions. “In an AI team, the product manager is responsible for understanding how AI can be used to solve customer problems and then translating that into a product strategy,” says Dorota Owczarek, AI product lead at AI development company Nexocode, who was recently involved in developing an AI-based product for the pharmaceutical industry to support the manual review of research papers and documents with natural language processing.
“The project required close collaboration with data scientists, ML engineers, and data engineers to develop the models and algorithms needed to power the product,” she says.
In that role, Owczarek was responsible for implementing the product roadmap, estimating and controlling budgets, and coordinating between the tech, user experience, and business sides of the product.
“In this particular case, as the project was initiated by business stakeholders, it was especially important to have a product manager who could ensure their needs were met while keeping an eye on the overall goal of the project,” she says, adding that AI product managers should have both technical skills and business acumen, and be able to work closely with different teams and stakeholders. “In most cases, the success of an AI project will depend on the collaboration between the business, data science, ML engineering, and design teams,” she says.
AI product managers also need to understand the ethical implications of working with AI, Owczarek adds. “They’re responsible for developing internal processes and guidelines that ensure the company’s products adhere to industry best practices.”
AI strategist
The AI strategist needs to understand how a company works at the corporate level, and to coordinate with the executive team and external stakeholders to ensure the company has the right infrastructure and talent in place to produce a successful outcome for its AI initiatives. To succeed, an AI strategist must have a deep understanding of their business domain and the basics of ML. They must also know how AI can be used to solve business problems, says Dan Diasio, global AI leader at EY Consulting.
“Technology was the hard part years ago, but it’s now reimagining how we wire our business to take the best advantage of that AI capability or AI asset that we create,” he says, adding that an AI strategist can help a company think transformationally about how it uses AI. “To change the way [a company makes] decisions requires somebody with a significant amount of influence and vision to be able to drive that forward.”
AI strategists can also help organizations obtain the data they need to fuel AI effectively. “The data that companies have inside their systems today or inside their data warehouses really only represents a fraction of what they’ll need to differentiate themselves when it comes to building AI capabilities,” Diasio says. “A part of the strategist’s role is to look to the horizon and see how more data can be captured and utilized without overstepping privacy considerations.”
AI governance strategist
Gen AI’s emergence has put it firmly in the regulatory crosshairs. Previous generations of AI brought with them data privacy and cybersecurity risks, but gen AI has the potential to do so much harm that an AI “kill switch” bill made it all the way to the governor’s desk in California before being vetoed, even as other bills, regulating such areas as deepfakes, have been signed into law. There are also laws in the works — or already in effect — in many other jurisdictions, including the European Union.
But it’s not just new regulations that companies need to watch out for. Cases related to copyright issues are working their way through the courts, and Air Canada was found to be responsible for the erroneous recommendations of its AI chatbot. There are also issues of bias, fairness, and ethics — issues which, if not properly addressed, could lead to bad publicity, a drop in employee morale and retention, and loss of market share. To address this, Insight’s Gentry recommends that an AI governance strategist be given responsibility to ensure that AI systems are developed and deployed responsibly, and to create frameworks and policies to govern AI use so there’s adequate compliance with regulations and ethical standards.
Chief AI officer
The chief AI officer is the lead decision-maker for all AI initiatives and is responsible for communicating AI’s potential business value to stakeholders and clients. “The decision-maker is someone who understands the business, business opportunities, and risks,” says iSchoolConnect’s Tulsankar.
In addition, the chief AI officer should know the use cases AI can solve, identify where the most significant financial benefit is, and articulate those opportunities to stakeholders.
“They should also chalk out how these opportunities need to be achieved iteratively,” he adds. “If there are multiple clients or multiple products across which the AI needs to be applied, the chief AI officer can break down client-agnostic and client-specific parts of the implementation.”
With the emergence of gen AI, the role of the chief AI officer is evolving as well, he says, “with a growing emphasis on accelerating the implementation of AI technologies to maintain a competitive advantage.”
Executive sponsor
The executive sponsor is a C-suite manager who takes an active role in ensuring AI projects come to fruition, and is responsible for obtaining funding for a company’s AI initiatives. Executive leadership has a significant role in helping drive the success of AI programs, says EY Consulting’s Diasio. “The biggest opportunities for companies often are areas where they break across particular functions,” he says.
A consumer products manufacturer, for example, has individual teams responsible for R&D, the supply chain, sales, and marketing, he explains. “The biggest and best opportunities to apply AI to help transform the business cut across all four of these functions,” he says. “And it takes strong leadership from the CEO or C-suite of a company to go after those changes.”
Unfortunately, senior managers in many companies aren’t adequately versed in the potential of AI, says BCG’s Khodabandeh. “Their understanding of it is quite limited, and they often think of it as a black box,” he says. “They throw it to the data scientist, but they don’t really understand the new ways of working with AI that are required.”
Adopting AI is a big cultural change for many companies that don’t understand how a high-functioning AI team works, how the roles work, or how they can be empowered, he adds. “For 99% of the traditional companies adopting AI, it’s a hard thing,” says Khodabandeh.