The role of the human in our technical development and future society


By Sarah Luger | Apr 26, 2022

Editor’s note: This story was first published in November 2022 in our internal newsletter and is being released for all readers as part of our AI April program. Learn more about our two AI April exploratory live panels here!

Visit our YouTube channel for video from our annual Hello Show, which brought together top executives, thought leaders, venture capitalists, and entrepreneurs for three half-days of technology, innovation, and exploration.

At the 2022 Hello Show hosted by Orange Silicon Valley from Nov 15–17, our community delved into innovative topics and met emerging startups that global thought leaders will be discussing for years to come. The breadth of ideas presented was staggering, and one theme recurred throughout: the role of humans.

What is our role as humans in technical innovation? How do we, as humans, evaluate the quality of products and services? Are we using artificial intelligence (AI) to make our lives easier? Are we focused on letting computers and humans each do what they do best? Orange is committed to building a society based on trust, one of the many human-centric tenets that guide our actions. In fact, “our mission is to ensure that digital services are well thought-out, made available and used in a more caring, inclusive, and sustainable way in all areas of our business.” Exploring the ways innovation might challenge and disrupt human living is part of our work.

During a fireside chat on “Responsible AI” on the second day of the Hello Show, the theme of human-centric design was introduced by Patrick Hall, Principal Scientist at BNH.AI and Visiting Faculty at The George Washington University School of Business. Hall offered numerous insights on human-computer collaboration. In his chat with Orange’s Francois Jezequel, he noted that robust AI that will not subvert a society’s values must begin with design strategies centered on the primacy of the human.

“It’s all about culture. All technology problems are human problems,” Hall emphasized. He went on to discuss the role of regulation in the AI space, noting that there is more regulation in AI than most people realize, but that to improve, regulation needs to reflect the priorities of different societies. The exchange was a reminder that humans are in control and face ongoing work in deciding how we want to build technology that serves us.

The human element was also central to the presentation from Ethereal Matter Founder Scott Summit, titled “Virtual reality and fitness: How will the metaverse improve health?” In his talk about developing virtual reality (VR) for new use cases, such as at-home physical therapy, he noted the challenges of post-injury physical therapy and how he has sought ways to leverage human behavior to increase engagement with, and the utility of, his products.

In a discussion on “State-of-the-art in conversational services,” Verint Chief Scientist Dr. Ian Beaver continued the human-first theme. Dr. Beaver joined me in a conversation about how best to deploy AI in call centers to improve customer service experiences and decrease costs.

One challenge is quality assurance: ensuring that automated systems provide accurate and consistent information. Another is real-time agent assistance: equipping agents with tools that help them serve customers better.

Finally, we discussed the challenge of forecasting demand and staffing contact centers. All of these are human challenges, with AI deployed to support humans. People retain a crucial role in evaluating the quality of machine performance, which is one way AI is changing the nature of call center jobs rather than replacing them. The truth that “all technology problems are human problems” will continue to be central to our innovation and the disruptions it causes.

Dr. Sarah Luger, Technology Group Principal, Orange Silicon Valley

Sarah believes that consumers should be able to purchase the products they want in the language of their choice. Her professional background blends Natural Language Processing (NLP) engineering and product architecture, responsible AI, conversational AI, and emerging machine translation technologies for under-resourced languages. She has built several first-of-a-kind NLP solutions, including automated email and text response experiences that are mainstream today. She leads efforts at Orange Silicon Valley in NLP and Responsible AI. Dr. Luger received her BS from Swarthmore College and both a Master of Science and a PhD in Informatics from the University of Edinburgh in Scotland.
