Effective AI adoption demands employee buy-in
Generative AI has the power to radically transform entire industries, but implementing it at scale will require significant changes to the way companies do business. Organizational change is never easy, and most change initiatives ultimately fail. With generative AI, the rapid pace of innovation and widespread fear of the technology heighten the risk of failure. But generative AI doesn’t just have the capacity to replace workers: it can add immense value to their work.
“There is going to be a lot of resistance to implementing AI, and a lot of different reasons for that resistance,” says Yongah Kim, an associate professor of strategic management at the Rotman School of Management. One pocket of resistance is workers’ reluctance to adopt a technology they believe could replace them. But Kim argues AI’s real value to a company is enabling growth.
Opacity of AI creates ethical challenges for executives
When the dot-com bubble went bust, tech stocks cratered. And a small, money-losing DVD rental company called Netflix got caught up in the fray. In the year 2000, Netflix wasn't a household name. The start-up had not yet gone public, and the market conditions weren’t looking especially favourable for it to do so.
Netflix’s co-founders approached the market-leading video rental company Blockbuster with a US$50-million acquisition offer. Blockbuster declined, believing digital video was just a fad. It could not have been more wrong: today, Netflix has more than 260 million subscribers, while Blockbuster is down to a single location in Bend, Oregon. “Even well-managed companies often resist change,” says Walid Hejazi. “But we live in a world of constant change. Leaders are there to help their organizations navigate it.”
It will be decades before businesses fully integrate AI into operations
Thomas Edison was granted a patent for the light bulb in 1880. But even though the potential of harnessing electricity was immediately apparent, it was not until the 1920s that 50 per cent of factories were electrified. Companies had to rethink how they operated, and it took decades to reimagine industrial production for a new era of electrification. But once organizations figured it out, the economy was profoundly transformed.
“With artificial intelligence (AI), we are just at the beginning of this process,” says Avi Goldfarb, a professor of marketing and the Rotman Chair in Artificial Intelligence and Healthcare. “It kind of feels like we are in the 1880s. We are in ‘the between times’ where we can see the technology’s potential, but have not quite figured out how to make it work.”
Data scientists write the code that will automate their own jobs
Coal miners will disappear in a clean-energy revolution. Taxi dispatchers are being replaced by ride-sharing apps. And video rental store employees lost their jobs to Netflix years ago.
History is replete with new technologies that have rendered entire occupations obsolete, turning a lost job into much more than a lost paycheque. Beyond earning a living, our careers weave into how we define ourselves as people—when an entire occupation disappears, pieces of professional and personal identity go with it. For data scientists, this prospect creates an existential crisis. Unlike a taxi dispatcher blindsided by the rapid rise of Uber, data scientists spend their days in high demand, busy writing the code that will automate their own jobs out of existence.
Social media has hijacked social learning
In middle school, how did you learn what clothes and bands were considered cool? It’s possible someone told you, but there’s a good chance you picked it up simply by watching what your classmates were listening to and wearing. This process, called social learning, is one of the most important ways we make sense of the world around us.
“Humans are natural social learners. We are constantly scanning the environment to figure out what other people are doing and what we can learn from that,” says William Brady, an assistant professor of management and organization. “Social learning happens whenever we observe people, get feedback from them, mimic them, and incorporate this information into our understanding of norms.”
A better algorithm can connect more volunteers with organizations
Every year in the United States, volunteers perform more than seven billion hours of service—delivering food to seniors, tutoring children, and much more. Platforms like VolunteerMatch, the nation’s largest online volunteer-recruitment resource, help make it happen by connecting volunteers with organizations that are seeking them. The site makes more than a million matches each month.
But even though the platform’s overall output is prolific, VolunteerMatch told Yale SOM’s Vahideh Manshadi that many of the organizations using it were still unable to find the volunteers they needed. And this was not because one organization was somehow more attractive to volunteers than others.
To overcome bias, hiring algorithms should consider race and gender
Amazon employs hundreds of thousands of people, and when you are building a workforce that’s the size of a medium-sized city, hiring top talent is a gargantuan task. Artificial intelligence holds promise to make human resources more efficient. But when the Seattle-based e-commerce behemoth implemented a machine learning algorithm to identify talent, that algorithm created an unforeseen issue.
“Algorithms like Amazon’s use historical data from existing employees to identify patterns between their characteristics and qualifications, and use these patterns to predict the suitability of job applicants,” says Warut Khern-am-nuai, an assistant professor of information systems at McGill. “They try to use applicants’ characteristics in their CVs to predict qualification. The problem is that it is very possible that there has been some discrimination in the past.”
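The mechanism Khern-am-nuai describes can be sketched with a toy example. The data, thresholds, and "model" below are entirely synthetic assumptions for illustration, not Amazon's actual system: a predictor that memorizes historical hire rates faithfully reproduces a past double standard between two groups.

```python
# Toy sketch: a model trained on biased historical hiring decisions
# inherits the bias. All data here is synthetic and hypothetical.

# Historical records: group A was hired at score >= 7, but group B was
# held to a stricter score >= 9 -- discrimination baked into the data.
history = [
    (group, score, score >= (7 if group == "A" else 9))
    for group in ("A", "B")
    for score in range(1, 11)
    for _ in range(50)  # 50 past candidates per (group, score) cell
]

# A naive "model": memorize the hire rate for each (group, score) pair.
rates = {}
for group, score, hired in history:
    seen, hires = rates.get((group, score), (0, 0))
    rates[(group, score)] = (seen + 1, hires + hired)

def predict(group: str, score: int) -> bool:
    """Predict 'hire' if most similar past candidates were hired."""
    seen, hires = rates[(group, score)]
    return hires / seen > 0.5

# Two equally qualified applicants with a score of 8:
# the model replicates the historical double standard.
print(predict("A", 8), predict("B", 8))  # True False
```

No one coded the double standard into the predictor; it emerges entirely from the training data, which is why simply hiding the group label is not enough when other CV features act as proxies for it.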
Rapid pace of technological change presents major challenge to creating archival records
WEF white paper is a blueprint for fairer AI in human resources
Bias in human resources has consequences. When one candidate is hired or promoted, other applicants are often simply out of luck. The decisions made in HR have a lasting impact on the composition of a company’s workforce and the trajectory of its employees’ careers. Biases of race, class, sex and gender have all contributed to corporate leadership that is mostly white and mostly male.
Artificial intelligence-driven human resources tools have the potential to change this – or to entrench it. Algorithms can disregard the unfamiliar spelling of a candidate’s name, the ethnic or religious affiliation of the educational institution where they studied, and even the gender pronouns they use. But AI still faces a major challenge: it learns from the data of the past, and every human-made HR data set carries the biases of the processes that produced it.