By Kevin Manne
Throughout his career, retired human resources executive Bernard Brothman, BS/MBA ’80, witnessed the evolution of artificial intelligence.
His work in HR began in 1980, a pre-computer era, when it was common practice to sort résumés by hand: receiving them by mail, reading them all and manually sorting candidates into stacks of yes, maybe and no.
Three jobs and three companies later, he worked at M&M Mars, a world-leading snack food manufacturer, where he helped set up a recruitment portal on the mars.com website.
He selected RecruitMax, cutting-edge software at the time that searched résumés for keywords pulled from job descriptions to surface potential candidates more effectively.
“The technology advanced throughout the 2000s and I was seeing artificial intelligence starting to do a lot of the work that we normally did ourselves,” says Brothman. “But candidates quickly caught on and they would import text verbatim from job postings into their résumés even if they weren’t necessarily qualified, so the AI would flag it as a match. They’d even put keywords in and change the font color to white so a person wouldn’t see them, but the machine would.”
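As a rough illustration of why that trick worked, the keyword screening Brothman describes boils down to counting overlaps between a résumé and a posting. The sketch below is illustrative Python only, not RecruitMax’s actual logic, and the keywords are invented.

```python
import re

def keyword_match_score(resume_text: str, job_keywords: set[str]) -> float:
    """Score a résumé by the fraction of job-description keywords it contains.
    Illustrative only -- real screening tools used far richer rules."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    hits = job_keywords & words
    return len(hits) / len(job_keywords)

job_keywords = {"recruiting", "benefits", "payroll", "onboarding"}  # hypothetical posting
resume = "Led onboarding and payroll for a 200-person plant; managed benefits vendors."

print(keyword_match_score(resume, job_keywords))  # 0.75 -- a keyword scanner sees only text,
# which is why pasted (or white-font) keywords inflate a score a human reviewer would question
```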
Brothman says that’s just one example of why it’s important to have human oversight in AI-assisted processes to ensure fairness and prevent bias.
“You’ve got to do an audit to see what’s happening, because AI isn’t perfect,” he says. “We need to stop and look at what the AI is giving us and how the machine returned those results — did the AI decisions result in us bringing in candidates we shouldn’t have, or did it reject candidates we should’ve interviewed?”
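One concrete audit of the kind Brothman describes is comparing selection rates across applicant groups, in the spirit of the “four-fifths rule” long used in U.S. hiring practice. The sketch below is illustrative only; the screening log is invented.

```python
from collections import Counter

def selection_rates(records):
    """records: list of (group, was_advanced) pairs from the AI screening log."""
    advanced, totals = Counter(), Counter()
    for group, was_advanced in records:
        totals[group] += 1
        advanced[group] += int(was_advanced)
    return {g: advanced[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate falls below 80% of the highest rate."""
    best = max(rates.values())
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Hypothetical audit log: (applicant group, advanced by the AI screen?)
log = [("A", True)] * 40 + [("A", False)] * 60 + [("B", True)] * 25 + [("B", False)] * 75
rates = selection_rates(log)
print(rates)                      # {'A': 0.4, 'B': 0.25}
print(four_fifths_check(rates))   # {'A': True, 'B': False} -> group B warrants a closer look
```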
In the School of Management, faculty have been studying the impact of AI in the workplace. One researcher, Kate Bezrukova, chair and associate professor of organization and human resources, first became involved with AI in 2014 through grant writing for NASA, where she saw growing interest in how AI and machine learning apply to humans.
In 2023, she published a study exploring factors that increase adoption of AI in teams and found that getting employees to use AI depends on two factors: employee attitudes toward the technology and the degree to which they can choose to work with it.
“The answer to the question about people collaborating with AI is more nuanced than simply, ‘Will they or won’t they?’” says Bezrukova. “Managers should be aware of a variety of responses when AI is introduced into the workplace.”
Bezrukova sees AI making sweeping changes in the job market, but maybe not in the way experts originally thought.
“Five years ago, we were thinking that transportation and manual labor jobs would be the first to go because intelligent machines and robotics were pretty much at the point of taking over,” she says. “Now, with AI, we’ve seen that white collar jobs like computer science are also being impacted — not necessarily in a bad way — but how we work will be changing. I’m an optimist, I think it will be changing for more accurate, more efficient solutions.”
According to a recent Forbes survey, almost all business owners believe ChatGPT will help their business, and nearly two-thirds believe AI will improve customer relationships. Meanwhile, automation is beginning to replace workers at fast-food restaurants and in grocery stores. And in Hollywood, writers and actors went on strike last year, demanding safeguards against AI.
Artificial intelligence is already affecting every aspect of business, from accounting and finance, to marketing, supply chains, HR and more — and its impact is growing every day. In the School of Management, faculty are conducting research to deepen our understanding of AI and how we can use it to change society for the better. Meanwhile, in workplaces around the world, alumni are at the forefront, putting this rapidly changing technology to work.
At a conference presentation in Hong Kong in 2018, Jasmeet Singh Gujral, MS ’12, predicted that “the future of AI will be conversational” — long before ChatGPT hit the market.
That prediction was spot on, especially for Gujral, who graduated from the School of Management with a Master of Science in management information systems. He’s now a product manager at eBay, where he leads the company’s efforts in using conversational AI to meet the support needs of its customers.
“Whenever customers need help, they get access to our Help Hub,” he says. “There, they have the option to interact with an AI-driven bot that is trained to understand and respond to customers' intents in the most personalized and conversational way possible.”
Gujral says developing AI for customer service involves creating complex conversational flows and making sure the system understands and responds appropriately to customer needs. But much like in HR, in customer service there’s only so much AI can do. So it’s critical to have AI that can recognize its own limitations and hand the discussion over to humans when needed.
“The AI needs to figure out who the best human agent is to serve the customer, and to empower that agent with the right data and the right insights in real time from the conversation the AI has had with the customer,” says Gujral.
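Conceptually, that handoff comes down to an escalation rule: if the bot’s confidence in the customer’s intent drops below a threshold, route the conversation to the best-matched human queue and pass the context along. The sketch below is a simplified illustration; the intents, threshold and routing table are invented, not eBay’s.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.6  # below this, the bot escalates rather than guessing

@dataclass
class Turn:
    text: str
    intent: str
    confidence: float

AGENT_SKILLS = {  # hypothetical routing table
    "refund_request": "agent_returns_team",
    "account_locked": "agent_account_security",
}

def handle(turn: Turn, transcript: list[str]):
    transcript.append(turn.text)
    if turn.confidence < CONFIDENCE_FLOOR or turn.intent not in AGENT_SKILLS:
        # Escalate: pick the closest-matching human queue and pass the full context along
        agent = AGENT_SKILLS.get(turn.intent, "agent_general_support")
        return {"handled_by": agent, "context": list(transcript), "intent_guess": turn.intent}
    return {"handled_by": "bot", "reply": f"Sure, I can help with your {turn.intent.replace('_', ' ')}."}

transcript: list[str] = []
print(handle(Turn("Where is my refund?", "refund_request", 0.92), transcript))
print(handle(Turn("It's more complicated than that...", "unknown", 0.31), transcript))
```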
Outside his full-time role at eBay, Gujral also is founder of Diaspora, an AI-driven company that provides universities with tools to help international students. His early-stage startup recently received $100,000 in pre-seed funding from UB’s Cultivator program and UB is among its first clients.
“We are enabling universities to help their international students through an AI assistant that will give them all the tools and services they need in a single, one-stop shop interface that is again conversational,” says Gujral.
Sanjukta Das Smith, chair and associate professor of management science and systems, says researchers are investigating how conversational AI agents express emotions and how that affects the humans on the other end of the discussion.
“In a customer service setting, whether and how an AI agent is emoting certain expressions can affect my perspective of the quality of service I’m getting,” she says.
Smith and her colleagues are also conducting research on mental health AI chatbot apps that are presented as solutions to issues like anxiety and depression, but can sometimes veer off into problematic territory.
“These apps are not regulated by the FDA, and some of the most popular apps are showing alarming characteristics,” says Smith. “So, we’re digging in to identify the risks with the goal of using our results to guide regulatory frameworks.”
The Management Science and Systems Department takes a big-picture look at the AI landscape, and Smith sees AI tools being used all over.
“Everyone I’m talking to who is using ChatGPT and its variants is using it as an accelerator,” she says. “It’s not going to replace a human, but it’s going to speed up a lot of the tasks we do, and allow us to invest that time into something that takes more brain power.”
Cristian Tiu, chair and associate professor of finance, says AI’s accelerator effect is making an impact on finance in several ways — for researchers, practitioners and individuals.
“AI is very good at getting context,” he says of his AI use in academic research. “For instance, it can infer the sentiment of a report beyond just counting positive and negative words like we used to, so now we can better understand the subtleties.”
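The contrast Tiu draws can be seen in a few lines: a word-count lexicon versus a pretrained sentiment model (here, Hugging Face’s off-the-shelf pipeline, used purely for illustration; the lexicons and report text are invented).

```python
# Requires: pip install transformers torch  (downloads a small pretrained model on first run)
import re
from transformers import pipeline

POSITIVE = {"growth", "strong", "record"}   # toy lexicons of the kind older
NEGATIVE = {"decline", "weak", "loss"}      # finance studies used for word counting

def lexicon_sentiment(text: str) -> int:
    """Old-style sentiment: positive word count minus negative word count."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

report = "Revenue growth was strong, although management warned the outlook remains uncertain."

print(lexicon_sentiment(report))      # +2: the count sees only positives and misses the caveat
classifier = pipeline("sentiment-analysis")
print(classifier(report))             # the pretrained model weighs the whole sentence, caveat included
```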
In addition, Tiu says AI is adept at cleaning data and extracting relevant information from documents, which can be especially useful in analyzing investment strategies or company reports.
AI also can act as a translator of sorts, making personal finance more accessible.
“AI can help individuals by reading and summarizing complex documents like bank contracts, making it easier to understand the key points,” says Tiu.
At financial services company Ally, Ruchi Kumar, MBA ’12, is director of technology program/delivery management. Ally is an all-digital bank, so there are no physical branches for customers to visit and conduct transactions.
This presents some unique challenges, according to Kumar, who spearheaded the launch of Ally AI last year to improve services while keeping critical financial data safe.
The company first deployed Ally AI in its call centers, where customers connect for all their banking services.
“We’ve begun using generative AI to summarize calls, which allows the representative to focus on what the customer needs, rather than on taking notes,” says Kumar. “The results have been very accurate, and it helps us see if the services we offered were aligned with what was expected, and if the customer experience was improved.”
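Ally’s internal tooling isn’t detailed here, but as a generic sketch, summarizing a call transcript with a hosted language model looks roughly like this; the client library, model name and prompt below are assumptions chosen for illustration.

```python
# Illustrative only: Ally's actual stack is not described here.
# Requires: pip install openai  and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_call(transcript: str) -> str:
    """Ask a hosted model for a short, structured summary of a support call."""
    prompt = (
        "Summarize this banking support call in three bullets: "
        "customer need, resolution offered, and any follow-up required.\n\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would work here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

transcript = "Customer: My debit card was declined abroad...\nAgent: I see a travel hold on the account..."
print(summarize_call(transcript))
```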
Kumar says that while AI’s influence in finance is growing, its integration is complex due to regulations, data privacy and ethical considerations.
“ChatGPT has been in the news because it’s so easy to use, but if I just go and start connecting all my consumer data there, it will start learning from it, which is a big issue in our space,” she says. “You need to put guardrails in place for different use cases before you allow AI tools to be used within your organization.”
One of the biggest concerns, she says, is ensuring that the output of AI models is nonbiased.
“The model will run what you make it learn, so how do you ensure the things being fed are correct and appropriate — and that there’s no bias in it? Everything with AI is coming in fast, and it’s taking time to figure out, but financial institutions are getting faster and more advanced all the time,” says Kumar. “And that’s important to avoid running into trouble with regulators or losing the trust of your customers.”
As in finance, the use of AI in accounting shows great promise. Firms are already using AI tools to make data-driven decisions, analyze large volumes of data for auditing, automate complex tax compliance tasks and more.
Management PhD student Victoria Gonzalez is conducting research in the school’s Management Science and Systems Department to see how AI can be used to predict accounting fraud.
Using multiple datasets of publicly traded companies, Gonzalez and her colleagues are developing a new prediction model to identify companies that are likely committing fraud.
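Gonzalez’s model itself isn’t described here, but the general approach is to train a classifier on financial-statement features labeled by known fraud cases and rank companies by risk. The sketch below uses synthetic data and stand-in features purely to show the shape of that workflow.

```python
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Synthetic features standing in for ratios such as accruals, receivables growth, margin swings
X = rng.normal(size=(n, 4))
# Synthetic labels: fraud is rare and loosely tied to the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=n) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]   # ranked scores let auditors review the riskiest filings first
print("AUC:", round(roc_auc_score(y_test, scores), 3))
```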
“I wanted to do research in accounting because it’s an area where we can explore how AI, machine learning and these new language models can be used to achieve new, practical results,” says Gonzalez.
In the wider field, Gonzalez says she sees AI changing the way accountants work.
“When I left JPMorgan Chase a couple years ago, they were heavily training people in the accounting area to use business analytics tools and software to improve efficiency and make their work easier,” she says. “Auditors were also using it to process information from their clients. That was three to five years ago, and it will be even bigger in the years ahead.”
In his role as senior officer at the U.S. Securities and Exchange Commission, Adam Storch, BS ’02, leads the Event and Emerging Risk Examination Team, which examines new and emerging technologies such as AI.
He says that in July 2023, the SEC proposed new rules that would require broker-dealers and investment advisers to take certain steps to address conflicts of interest associated with their use of predictive data analytics and similar technologies to interact with investors. The goal is to prevent firms from placing their own interests ahead of investors’ interests.
Storch says the accounting and business foundation he built in the School of Management gives him a strong grounding in business principles, which is essential for understanding the complexities of financial regulations and enforcement. And for new accounting professionals, he says it’s critical to understand AI’s current capabilities and to prepare for the future.
“It’s becoming table stakes to understand this technology, how it works, and its advantages, disadvantages and its potential risks,” says Storch. “It’s an important tool you will be expected to use in the normal course of work that accounting graduates do — conducting audits and providing assurance services — and it’s likely that an increasing number of your clients will be using it, too.”
AI is already transforming the marketing landscape, from improving the efficiency of operations to providing more engaging, personalized experiences for consumers that improve conversion rates and customer loyalty.
Turner Gutmann, MBA ’12, is putting AI to work in sales, finance and marketing at Autodesk, where he is technical program manager of decision science strategy. There, he is focused on scaling data science across the company, which includes assisting the sales and marketing teams to better understand and target customers.
Autodesk is best known for its AutoCAD 2D and 3D modeling software, which aids in the design of everything from buildings and cars to consumer products. Autodesk products are even used to create special effects in Hollywood films.
“We’re using machine learning and algorithms to help surface potential customers to our sales and customer success teams,” says Gutmann. “We find customers who look like they’re ready to buy a different product, customers who may not be using their products enough and those for whom we feel there’s growth potential.
“For a salesperson to do that they’d just have to basically be lucky because they couldn’t brute force their way through the 10 million iterations on more than 400 customer attributes that it took the computer to figure out the most accurate way to predict a customer who’s ready to grow.”
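The shape of that workflow is a propensity model: score every account on hundreds of attributes and surface the highest-scoring ones to sales. The sketch below is illustrative only; the accounts, attributes and labels are synthetic, not Autodesk’s.

```python
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_accounts, n_attributes = 5000, 400          # hundreds of usage/firmographic attributes per account
X = rng.normal(size=(n_accounts, n_attributes))
# Historical labels: did the account expand to another product within a year?
y = (X[:, :5].sum(axis=1) + rng.normal(size=n_accounts) > 2).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score current accounts and hand the top of the list to the sales team
propensity = model.predict_proba(X)[:, 1]
top_accounts = np.argsort(propensity)[::-1][:10]
print("Accounts to surface to sales:", top_accounts.tolist())
```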
Charles Lindsey, associate professor of marketing, has been studying the impact of AI on marketing and analyzing how it’s already being employed by practitioners.
AI helps make efficient, data-driven decisions in marketing strategies, according to Lindsey, because it can serve up personalized content based on the interactions you’ve had at any point of the marketing funnel: from awareness all the way through to the decision to purchase.
“Based on predictive analytics, if we have two customers at the final consideration stage, one customer is more likely to purchase if they receive one type of message, and the other is more likely with a different type of message,” says Lindsey.
“We’ve known since kindergarten that all people aren’t the same,” he says. “But we’ve never really been able to use all these different data points because it requires a lot of person power. Now, machines do it — and they’re just learning. It’s kind of like your money earning interest while you’re sleeping.”
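In code, the choice Lindsey describes can be sketched as one response model per message variant, with each customer routed to whichever message their model scores higher. The data below is synthetic and the setup deliberately simplified.

```python
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 4000
X = rng.normal(size=(n, 6))                    # customer attributes (engagement, recency, etc.)
shown = rng.integers(0, 2, size=n)             # which message each past customer received (0 or 1)
# Synthetic outcomes: message 1 converts better for customers with a high first attribute
converted = ((X[:, 0] * (2 * shown - 1) + rng.normal(size=n)) > 0.5).astype(int)

# One response model per message variant, fit on past campaign data
models = [LogisticRegression().fit(X[shown == m], converted[shown == m]) for m in (0, 1)]

def best_message(customer: np.ndarray) -> int:
    """Send whichever message this customer is predicted more likely to respond to."""
    probs = [m.predict_proba(customer.reshape(1, -1))[0, 1] for m in models]
    return int(np.argmax(probs))

print(best_message(np.array([1.5, 0, 0, 0, 0, 0])))   # likely message 1 for this profile
print(best_message(np.array([-1.5, 0, 0, 0, 0, 0])))  # likely message 0 for this one
```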
Looking ahead, Lindsey says it will be critical to put data analytics and AI in the hands of decision makers.
“We need to get away from ‘data-driven decision making’ and flip the script to ‘decision-driven data analysis,’ where leaders tell the data scientists the decisions they have to make on behalf of the firm, and the data scientists support them through AI and machine learning tools,” Lindsey says.
Natural disasters. Cyberattacks. Port strikes. Global warming.
These are just a few of the many factors that can disrupt supply chains — not to mention COVID-19 and how it exposed major weaknesses, most notably for groceries and medical supplies.
As in many other areas of business, companies are starting to use AI to analyze vast amounts of data to help manage their global supply chains, logistics, procurement and manufacturing.
Ananth Iyer, dean and professor of operations management and strategy, says this enhanced data integration can help businesses refine predictions about demand and disruptions.
“With AI, companies can adapt to various external factors such as macroeconomic conditions, competitive products in the market and geopolitical conflicts,” he says. “This allows for more fine-grained management of supply chains, contributing to improved efficiency and responsiveness.”
Manufacturing and supply chain strategist Courtney Weir, MBA ’18, works at a California-based AI supply chain startup and predicts increased accuracy, streamlined operations and better job satisfaction with the integration of AI.
She envisions a future where AI supports better decision-making and more efficient processes but says the industry has been slow to adopt the technology.
“From my experience, most companies with a supply chain and physical goods struggle to keep up with the pace at which information is changing, and they need tools to help them share accurate information seamlessly,” says Weir. “Currently, everyone has their own source of truth from which they’re making decisions. This has caused issues downstream, leading to unnecessary expedite costs, rework and increased inventory levels, which is all the more reason to leverage AI to collaborate more effectively.”
Weir says AI is a tool that supply chain managers can use to centralize data, maintain audit history, conduct risk-benefit analyses and communicate with suppliers.
“Relationships are the foundation for success in supply chain, and AI will continue to act as a tool to better support those relationships by improving the accuracy of information and timeliness of responses,” she says.
In the School of Management, Iyer says faculty are integrating AI into coursework across disciplines while maintaining those critical personal connections with students.
“We tell students to use it, but understand the limitations,” says Iyer. “What I find particularly interesting to think about is if we can now effectively teach at a higher level because many of the things that were rote are easily available. That’s the opportunity for business schools.”