Volume 1 (2021-22)

Each volume of Journal of AI, Robotics and Workplace Automation consists of four quarterly 100-page issues. 

The articles and case studies published in Volume 1 are listed below:

Volume 1 Number 4 - Special Issue: Ethics for artificial intelligence (AI), robotics and workplace automation

  • Editorial - Towards an ethics of AI, robotics and workplace automation
    Christopher Johannessen, Chief Digital Strategist, Axis Group, and Editor, Journal of AI, Robotics & Workplace Automation
  • Editorial board pieces:
    Whose values should be ingrained in algorithms?
    Jim Sterne, Director Emeritus, Digital Analytics Association
  • How AI/ML can combine with RPA to bring workplace automation to the next level, while maintaining governance and ethics
    Lou Bachenheimer, Chief Technology Officer, Americas, SS&C Blue Prism
  • The AI ethics officer and responding to data standards and accountability
    Mary Purk, Managing Director of Wharton’s AI and Analytics for Business Center, University of Pennsylvania
  • Explainable AI
    Murat Açar, PhD, Head of Analytics, Innovation and R&D, ATP (ATA Technology Platforms)
  • Corporate digital responsibility: Dealing with ethical, privacy and fairness challenges of AI
    Professor Jochen Wirtz, Vice Dean, National University of Singapore, et al.
  • Architecting AI will improve AI ethics
    Dr. Steven Gustafson, Chief Technology Officer, Noonum, Inc
  • Papers:
    Discriminatory data: Why governments need to view digital privacy as an equity issue
    Albert Gehami, Digital Privacy Officer, City of San José

    Recent US privacy legislation has introduced robust privacy controls on the private sector but has tended to leave governments alone. Many public services require an exchange of privacy for safety and convenience, which can have a disproportionate impact on marginalised communities. This paper argues the need for governments to incorporate digital privacy into their equity strategies and shares the City of San José’s initial approach as a case study. The paper presents three elements through which digital privacy can have an impact on the equity implications of new surveillance technology: purpose, place and accuracy. It concludes that, by integrating surveillance technology with privacy protections and a prioritisation of racial equity, governments can create lasting technical systems that provide better, faster and potentially more affordable services to all communities.
    Keywords: privacy, digital, equity, data, surveillance, smart city, government

  • Why automatic AI ethics evaluations are coming, and how they will work
    James Brusseau, Philosophy Professor, Pace University in New York City and Giovana Meloni Craveiro, Graduate, University of São Paulo

    Ethics evaluations of companies that function with AI at their core are increasingly required by regulation and law in Europe and the US. Investors in artificial intelligence (AI)-intensive companies also seek ethics evaluations as part of the nonfinancial information they gather about corporate performance, especially as it relates to privacy and algorithmic fairness. The result is an increasing demand for the evaluations. The costs and time necessary to perform an AI ethics audit, however, are high, even prohibitive. To solve the problem, natural language processing (NLP) and machine learning (ML) can be employed to automate the process. The proposal is that much of the work of AI evaluating can be accomplished more efficiently by machines than by humans. To show how automated ethics reporting may work, this paper describes a project currently underway at Pace University in New York and the University of Trento in Italy. The project endeavours to apply AI to the task of producing AI ethics evaluations.
    Keywords: AI, AI ethics, ethical investing, AI human impact, fintech, natural language processing

  • Building trust and confidence in AI
    Janet Bastiman, Chief Data Scientist, Napier

    While some industries are rushing to adopt artificial intelligence (AI) technologies, others are lagging behind, due either to their own lack of confidence or to perceived conflicts with regulation or customer needs. This paper covers some of the myths perpetuated within the AI community regarding trust and confidence, and how to begin building AI solutions with end-user trust as a priority, taking into account the latest regulatory proposals.
    Keywords: trust, confidence, explainability, testing, risk, legislation

  • AI–human interaction: Soft law considerations and application
    Catherine Casey, Chief Growth Officer, Reveal Brainspace, Ariana Dindiyal, Associate and James A. Sherer, Partner, BakerHostetler

    This paper defines the utilisation of ‘soft law’ concepts and structures generally, considers the application of soft law to the perceived gap between artificial intelligence (AI) approaches and normal human behaviours, and subsequently explores the challenges presented by this soft law application. The authors submit that AI is only becoming more prevalent, and increased uses of this technology logically create greater opportunities for ‘friction’ when human norms and AI processes intersect — especially those processes that seek to replace human actions, albeit inconsistently and imperfectly. This paper considers that friction as inevitable, but instead of offering wholesale objections or legal requirement application to AI’s imperfect intrusions into humans’ daily lives, the authors consider ways in which soft law can smooth the path to where we are collectively headed. As human–computer interaction increases, the true role of AI and its back-and-forth with humans on a day-to-day basis is itself rapidly developing into a singular field of study. And while AI has undoubtedly had positive effects on society that lead to efficient outcomes, the development of AI has also presented challenges and risks to that which we consider ‘human’ — risks that call for appropriate protections. To address those concepts, this paper establishes definitions to clarify the discussion and its focus on discrete entities; examines the history of human interaction with AI; evaluates the (in)famous Turing Test; and considers why a gap or ‘uncanny valley’ between normal human behaviour and current AI approaches is unsettling and potentially problematic. It also considers why certain types of disclosure regarding AI matter, why they are appropriate, and how they can assist in addressing the problems that may arise when AI attempts to function as a replacement for ‘human’ activities. Finally, it examines how soft law factors into the equation, filling a need and potentially becoming a necessity.
It considers the use-case of how one US legislative body initiated such a process by addressing problems associated with AI and submits that there is a need for additional soft law efforts — one that will persist as AI becomes increasingly important to daily life. In sum, the paper considers whether the uncanny valley is not a challenge so much as a barrier to protect us, and whether soft law might help create or maintain that protection.
    Keywords: artificial intelligence (AI), soft law, Turing Test, uncanny valley, chatbot

  • Developing a conceptual framework for identifying the ethical repercussions of artificial intelligence: A mixed method analysis
    Tahereh Saheb, Assistant Professor, Tarbiat Modares University, Sudha Jamthe, Technology Futurist and Tayebeh Saheb, Dean of Law Faculty, Tarbiat Modares University

    Given that the topic of artificial intelligence (AI) ethics is novel, and many studies are emerging to uncover AI’s ethical challenges, the current study aims to analyse and visualise the research patterns and influential elements in this field. This paper analyses 1,646 Scopus-indexed publications using bibliometric analysis and cluster content analysis. To classify the most prominent elements and delineate the intellectual framework as well as the emerging patterns and gaps, we utilised keyword co-occurrence analysis and bibliographic coupling analysis and network visualisation of authors, countries, sources, documents and institutions. In particular, we detected nine major applications of AI in which ethics of AI is highly discussed, 24 ethical categories and 66 ethical concerns. Using the VOSviewer software, we also identified the general ethical concerns with the greatest total link strength regardless of their cluster associations. Then, focusing on the most recent articles (2020–21), we performed a cluster content analysis of the identified topic clusters and ethical concerns. This analysis guided us in detecting literature gaps and prospective topics and in developing a conceptual framework to illustrate a comprehensive image of ethical AI research trends. This study will assist policymakers, regulators, developers, engineers and researchers in better understanding AI’s ethical challenges and identifying the most pressing concerns that need to be tackled.
    Keywords: artificial intelligence, ethics, privacy, surveillance, bias, accountability, responsibility, autonomy

Volume 1 Number 3

  • Editorial
    Christopher Johannessen, Chief Digital Strategist, Axis Group, and Editor, Journal of AI, Robotics & Workplace Automation
  • Practice papers:
    Lessons learned running AI-powered solutions in production
    Kyle Hansen, Head of AI Engineering, Kingland Systems

    This paper examines lessons learned from running artificial intelligence (AI)-powered solutions in production. It starts by describing the evolving approach the author’s company has developed to gathering training data, followed by its experiences running AI-powered solutions at scale, some particular tips around managing algorithmic complexity and, finally, lessons learned regarding testing and regression. The author also shares insights gained while performing proofs of concept (PoC) for clients. A common theme throughout the paper is the critical role of the subject-matter expert (SME) in automating complex business processes. The team’s SMEs may not even have detailed knowledge of the technology in play, but their expertise in the business domain is crucial to success in these endeavours.
    Keywords: machine learning, enterprise, scaling, algorithms, optimisation, software testing, client interaction

  • Artificial intelligence pitfalls and how to avoid them
    Matt Armstrong-Barnes, Chief Technologist, Hewlett Packard Enterprise

    Artificial intelligence (AI) is a mainstream technology and has become a cornerstone of digital transformation initiatives. Founded on sound academic principles, why do so many AI projects fail? Getting AI into business pipelines is a relatively new discipline that faces several of the same challenges as traditional software development, as well as many that are specific to the discipline. This paper aims to highlight the common pitfalls that hamper AI projects, from initiatives that end up as interesting science projects to those that remain abstract dreams. Putting the proper controls in place will drive success while striking the fine balance between killing innovation with too much governance and creating the Wild West with too little. Several foundations are needed to mitigate the most common pitfalls; these are significant contributors to getting AI over the line and into production in a way that creates market differentiation.
    Keywords: artificial intelligence, implementation, pitfalls, mitigation

  • Ethical AI/ML at Nestlé: From vision to strategy to execution
    Carolina Pinart, Global Product Director and Enrique Mora, AI Senior Solutions Architect, Nestlé

    Artificial intelligence (AI) will reshape the source of value creation for consumer packaged goods companies, the creation of new business models and the delivery of value-added services such as customisation at scale. This paper outlines the vision of consumer packaged goods company Nestlé for AI and machine learning (ML), provides an overview of the company’s strategy to leverage this technology, and describes the three fundamental pillars for executing on that strategy, that is, for deploying AI at scale: AI/ML as a toolbox to solve business problems, ethical-by-design AI and a flexible operating model. The paper also provides examples of how Nestlé is leveraging its operating model to scale up and explore high-value use cases, depending on their maturity. For mature use cases, we typically adopt out-of-the-box models, fine-tuned or not to our specific problem, while for emerging use cases we must conduct proofs of concept to test whether the use case is feasible and to what extent the solution can be industrialised in the near future.
    Keywords: strategy, execution, ethics, artificial intelligence, machine learning

  • How conversational AI is enabling the experience economy
    Laetitia Cailleteau, Managing Director, Accenture, Emer Gilmartin, Engineer and Computational Linguist, ADAPT Centre, Trinity College Dublin and Fuad Hendricks, Senior Manager, Accenture

    The use of conversational AI (CAI), already experiencing strong growth, has accelerated throughout the COVID-19 pandemic. Natural or human language interfaces enable engaging, satisfying and efficient experiences across platforms and sectors. This paper briefly introduces CAI, provides a snapshot of where the technology is, with examples of recent use cases. It then explores the growing area of ethical considerations, best practice and legislation for CAIs, and maps human rights concerns to the technological landscape. It concludes with pointers towards useful, ethically sound applications where users can collaborate easily and flexibly with CAIs.
    Keywords: conversational AI, user experience, ethics, responsible AI, digital literacy, platform architecture

  • Domain knowledge is necessary but not sufficient
    Zoë Webster, AI Director, British Telecom

    This paper discusses the importance of domain knowledge for successful AI projects and illustrates this with examples of where an under-appreciation of domain knowledge has led to unexpected or unwelcome results. In addition, other types of knowledge, also necessary for successful AI development and deployment, are presented.
    Keywords: AI, domain knowledge, domain expert, data science

  • Earning citizen confidence through a comprehensive approach to responsible and trustworthy AI stewardship and governance
    Pamela Isom, Director, U.S. Department of Energy

    Confidence in artificial intelligence (AI) is necessary, given its growing integration in every aspect of life and livelihood. Citizens are sharing both good and unpleasant experiences that have fuelled opinions of AI as an emerging, advantageous capability while also expressing an abundance of concerns that we must address. Clean energy scientific discoveries, the supply of autonomous vehicles that perform with zero carbon emissions, the rapid discovery of chemicals and/or anomalies that generate medicinal value, or the integration of AI in human resource processes for accelerated efficiencies are examples of AI use cases that can save lives and do so at the speed of urgency. The concerns and challenges are to ensure that models, algorithms, data and humans — the whole AI — are secure, responsible and ethical. In addition, there must be accountability for safety and civil equity and inclusion across the entire AI life cycle. With these factors in action, risks are managed and AI is trustworthy. This paper considers existing policy directives that are relevant for managing risks across the AI life cycle and provides further perspectives and practices to advance implementation.
    Keywords: confidence, responsible AI, AI ethics, explainable, trustworthy AI, AI life cycle, governance, stewardship, AI risk management, AI assurance, AI-IV&V

  • Destination digital? Using AI to enhance the customer experience
    Jo Causon, CEO, The Institute of Customer Service

    Organisations are looking to harness artificial intelligence (AI) to deliver a more efficient, automated and cost-effective customer experience. The COVID-19 pandemic, Brexit and a critical shortage of key skills are accelerating and broadening the use of digital channels, automation and AI as businesses look to adjust to the shifting landscape. This paper draws on two pieces of research by The Institute of Customer Service and considers the impact that future working practices, evolving customer behaviour and rapid developments in AI will have on service. It seeks to provide a window on the differing perspectives of senior executives, managers, employees and customers to highlight the service-related opportunities AI presents and the key issues to consider. The research suggests that AI has enormous potential to improve the customer experience and streamline many interactions. AI can be deployed to both replace and augment human service professionals, but there are some complex areas to navigate. Trust and transparency are crucial — and there is a big gap between the ways customers say they are comfortable with AI being deployed and the way organisations currently use it or plan to use it in the future.
    Keywords: customer, customer service, customer experience, customer behaviour, CX

  • Inclusive ethical AI in human–computer interaction in autonomous vehicles
    Sudha Jamthe, Technology Futurist, Yashaswini Viswanath, Resident Researcher, Business School of AI and Suresh Lokiah, Engineering Leader, Xperi

    Artificial intelligence (AI) used in autonomous vehicles (AVs) to check for driver alertness is a critical piece of technology that makes the decision to hand over control to the human if there is a disengagement of the autonomous capability. It is important that this AI be inclusive and without bias, because treating drivers differently will have an impact on the safety of humans not only in the vehicle but also on the road. This paper evaluates the AI that powers driver attention systems in the car to check whether the AI treats all humans inclusively, regardless of their ethnicity, gender and age, and whether it follows the AI ethics principles of trust, transparency and fairness. Driver attention is built using two different AI models: one uses camera data to recognise humans, and the other evaluates whether the human is alert. We found that both of these AI models are biased and not inclusive of all people in all situations. We also found that there are unethical practices in how humans are tracked to check for alertness, by using infrared sensors that track their retina movements without any concept of consent or privacy for the people being tracked in the vehicles. The paper builds upon prior research on face detection outside the car and research showing that car cognition AI does not recognise all humans on the road equally. We present research results on how the car is biased against some humans in its face identification, and on how the assertion of alertness of humans to hand over control during an emergency is fundamentally flawed in its definition of alertness. We recommend mitigation techniques and call for further research building upon our work to make AVs inclusive by mitigating bias in all forms of AI in AVs.
    Keywords: autonomous cars, self-driving cars, driverless cars, bias, technology, robotaxi, driverless world, full autonomy, autonomous vehicles, ADAS, connected vehicles, data in the car, AI in the car, data, artificial intelligence, AI ethics, inclusive AI

Volume 1 Number 2

  • Editorial
    Christopher Johannessen, Chief Digital Strategist, Axis Group, and Editor, Journal of AI, Robotics & Workplace Automation
  • Practice papers:
    Processes and decision automation for financial markets trade surveillance: Challenges and recommendations — next-generation solutions
    Cristina Soviany, CEO, Features Analytics and Sorin Soviany, Senior Researcher, National Institute for Research and Development in Informatics-ICI

    This paper presents the challenges institutions encounter when implementing or deploying market surveillance solutions. A guidance framework is provided for designing and deploying solutions that can overcome the current challenges. The supporting technologies that should be used include artificial intelligence (AI) techniques, advanced data science and statistics. Next-generation surveillance solutions should make use of these technologies, which provide a robust framework for detecting market abuse, covering both known and newly emerging patterns, while reducing operational cost.
    Keywords: trade surveillance, market abuse, financial instruments, automation, artificial intelligence (AI), explainable, proactive

  • The automation of marketing
    Andrew W. Pearson, Founder and Managing Director, Intelligencia Limited

    Today, the average campaign response rate is less than one per cent, and with the emergence of artificial intelligence (AI), traditional marketing is wading into troubled waters: the waves of strategic campaigns, retargeting, real-time media buying and personalised e-mailing are not moving the marketing needle as they used to. The robotisation of shopping and marketing is changing how brands compete for consumers. This paper stipulates that the real opportunity in marketing today lies in redefining the customer relationship rather than in cutting costs. In the future, humans might only be needed in the consumption phase of the buying cycle, and purchasing decisions could be left to IoT-connected bots that order products for the customer. These products will then be delivered by anticipatory systems that understand when orders are to be made. For companies to succeed in this environment, they need to make the marketing, ordering and delivery process as seamless as possible, not just for the humans who will consume the products but also for the bots that will order them.
    Keywords: artificial intelligence, customer engagement, personalisation, machine learning, anticipatory shipping, customer relationship management, automated shopping, behavioural marketing

  • From form to function and appeal: Increasing workplace adoption of AI through a functional framework and persona-based approach
    John W. Showalter, Adjunct Professor, George Washington University and Grace L. Showalter, Director of Clinical Transformation, AccentCare

    Traditional methods of developing and implementing artificial intelligence (AI) inhibit widespread workplace adoption, because the development of AI has focused on advancing existing, and discovering new, technologies rather than solving industry problems. This paper discusses how, to create scalable and sustainable AI adoption, form must follow function, rather than function being driven by form. This requires a new framework for understanding AI that focuses on the function of the solution rather than the form of the technology. A functional framework for AI categorises solutions by human impact on tasks and decisions: automating AI eliminates human effort; augmenting AI improves human efficacy; accelerating AI transforms systems to increase human efficiency. A human-centred understanding of AI facilitates a persona-based approach to implementation and adoption. Robust personas of target end users can be created by understanding their preferred learning styles using Kolb’s experiential learning theory (KELT) and identifying the elements of motivation that empower them to change using the theory of planned behaviour (TPB). Layering KELT and TPB on top of the functional AI framework allows for the creation of a significance matrix to understand natural synergy or discord that exists between AI solutions and target end users. In addition to the significance matrix, personas must identify and define value for target end users, which combines with other elements to create appeal. Appeal can be leveraged to create scalable implementation and adoption plans that function across industries and exploit natural synergies. Healthcare industry examples are provided to demonstrate the overlay of the functional AI framework with KELT and TPB, along with persona-defined value, to drive adoption. Strategies for mitigating discordance between AI solutions and end users, and increasing appeal, are described.
    Keywords: artificial intelligence (AI), healthcare sector, healthcare, workplace, motivation, persona, value, adoption

  • The future of AI: Generational tendencies related to decision processing
    Christopher S. Kayser, Founder, President and CEO of Cybercrime Analytics, and Robert Cadigan, Associate Professor Emeritus of Applied Social Sciences, Boston University

    Advances in artificial intelligence (AI) have resulted in the automation of human-based decision processing, and AI has become entwined with almost every aspect of our lives. While advantageous in many respects, when conditions permit a decision about accepting, adopting or rejecting AI in one’s everyday life, many elect not to embrace it. Such decisions can stem from a lack of knowledge of how to assess the benefits of such modernisation, but can also result from tendencies specific to different generations. This paper examines three generations — Baby Boomers, Gen Xers and Millennials (born 1946 to 1994; reaching adulthood 1967 to 2015) — who collectively participated in nearly a half-century of some of the most significant technological advances in history. These changes contributed to each generation’s understanding of, comfort with, and decision making that ultimately determines their attitude toward and rate of adoption of AI. In light of Bourdieu’s theory of practice, we examine social models and theories of innovation to better understand each generation’s decisions regarding AI — primarily based on their interpretation of the perceived benefits offered by such advancements in technology.
    Keywords: artificial intelligence (AI), Baby Boomers, decision processing, generational, Gen Xers, Millennials, social theories

  • Step by step: A playbook for AI innovation within commercial real estate organisations
    Patrick McGrath, Chief Information Officer and Head of Client Technologies, Savills, et al.

    Commercial real estate (CRE) lease agreements contain a treasure trove of untapped insights into one of the world’s largest asset classes. With commercial leases being the legal instrument for executing and documenting the terms and conditions of hundreds of billions of dollars of CRE transactions each year, the aggregate data housed in these agreements has the potential to inform valuable insights across the world’s largest real estate markets. Leases traditionally exist, however, only as paper copies or electronic PDFs, requiring large investments of time and human capital to read, understand, harvest and structure the data from their pages. CRE companies are in a unique position to leverage artificial intelligence (AI) and machine learning (ML) techniques to crack the code on these leases and unlock significant value through digital insights across the marketplace. Pioneering companies will need to surmount several obstacles that have long deterred CRE organisations from embracing these automated workflows. The sheer complexity and variety of the documents’ formats, structure and terms, the disaggregation of these documents among market participants, and the general disorganisation of the participants’ systems for storing them are all serious challenges. Meanwhile, the investment of time and resources necessary to train an algorithm sophisticated enough to navigate these challenges is a significant ask. This paper provides a playbook of best practices for surmounting these obstacles and achieving successful integration of ML–AI techniques in the CRE industry, enabling internal efficiencies and new avenues of value creation. The paper analyses a case where a CRE service provider was able to get buy-in from stakeholders by defining a project roadmap focused on upskilling pre-existing human capital investments, ultimately creating a business case to leverage ML–AI techniques to enhance data structuring workflows. 
The results of this project showed that the real value derived from these technologies did not come from the outputs or cost savings alone. The test project created a competitive advantage for the company by pairing the technology with a skilled team. The team brought a ‘product mindset’ focused not only on learning and developing the technology, but on continually finding new and better ways to use the technology to create a valuable service offering for occupier clients.
    Keywords: commercial real estate (CRE), artificial intelligence (AI), data structuring

  • Four frontiers of AI ethics
    Sarah Khatry, Student, University of Iowa

    The field of AI ethics is still solidifying, and is beginning to have tangible effects on the direction of innovation in the technology. This paper identifies four ethical frontiers that are likely to shape the future of AI. The first is the recognition of the contexts represented by the data and culture of an AI system’s development, which can be a challenge to the aim of using AI to bring about greater prosperity and equity globally. Next, some of the difficulties and discoveries of investigations into AI explainability are discussed. The vulnerability created by AI’s reliance on large amounts of data, given the advent of data rights, is then described, along with the innovative solutions currently being explored to support secure, privacy-preserving AI. Finally, the two fields that best exemplify the ethics of AI-supported decision making, medicine and warfare, serve as case studies in autonomy and accountability.
    Keywords: AI ethics, explainability, transparency, privacy, data rights, decision support

  • Applying AI in anti-money laundering operations
    Arin Ray, Senior Analyst, Celent

    Although the stakes of financial crime compliance operations have risen greatly, the traditional technology used in compliance operations has reached an impasse. Rules-based technology and siloed operations are proving inadequate for detecting hidden risks, and financial institutions are drowning in alerts. A flood of false positives and a heavy reliance on manual processes are making anti-money laundering (AML) programmes costly, inefficient and unsustainable. This paper discusses how solutions powered by artificial intelligence (AI) and machine learning (ML) have the potential to solve the current challenges in AML. They can enable institutions to adopt a more informed, risk-based approach and ensure that the most critical attributes and scenarios are fed into a detection engine with finely tuned parameters and thresholds. They can also help generate an optimal number of high-quality alerts that are prioritised according to risk. Case management efficiency and effectiveness can be enhanced by incorporating AI and ML techniques, while learnings can be fed back into all stages for continuous improvement. The paper analyses how these capabilities will allow financial institutions to manage money laundering risks proactively and holistically, reducing costs, inefficiencies and the chances of fines. AML departments at financial institutions have started dipping their toes in the pool of advanced analytics. Typically, they start with tactical AI adoption in one or a few areas; success in the early stages should expedite further adoption.
    Keywords: artificial intelligence (AI), machine learning (ML), know your customer (KYC), anti-money laundering (AML), transaction monitoring, watchlist screening, suspicious activity report (SAR)

  • Dilemmas of digital labour
    Anna Aurora Wennäkoski, PhD student, University of Helsinki

    Digital labour raises many questions, including how to measure quality, efficiency and cost, but also questions regarding people, ranging from skills and labour costs to salaries and unionisation. This paper aims to highlight some pertinent threads and arguments about technological disruption in workforce automation and robotisation. By looking at the different postures presented, especially in policy and the literature, it aims to break down some of the most egregious speculations related to robotisation and automation and to distil more specifically what kinds of new needs are emerging, rather than perceiving these developments as existential threats.
    Keywords: digital labour, labour costs, technological disruptions, workforce automation and robotisation, robotisation and automation

Volume 1 Number 1

  • Editorial
    Thomas H. Davenport, Distinguished Professor, Babson College, Research Fellow, MIT Center for Digital Business and Senior Advisor, Deloitte Analytics
  • Practice papers:
    The path to AI in procurement
    Philip Morgan, Senior Director, Electronic Arts

    What is artificial intelligence (AI) in procurement, and how do we get there? This paper provides definitions for the various stages of the development of AI in procurement and a practical guide to achieving the meaningful application of AI in procurement practices: from identifying the opportunities for AI that are right for an organisation, to preparing policies and practices before implementing an AI solution, to deploying intermediate steps in automation and reporting, and ultimately to reaching the goal of meaningfully functional AI in support of live procurement operations.
    Keywords: procurement, sourcing, continuum, intelligent automation, smart search, application, chatbot

  • How to kickstart an AI venture without proprietary data: AI start-ups have a chicken and egg problem — here is how to solve it
    Kartik Hosanagar, Professor, Wharton School of the University of Pennsylvania and Monisha Gulabani, Research Assistant, Wharton AI for Business

    Even when entrepreneurs have innovative ideas for applying AI to real-world problems, they can encounter a unique challenge in kickstarting their AI ventures. Today’s AI systems need to be trained on large datasets, which poses a chicken-and-egg problem for entrepreneurs. Established companies with a sizable customer base already have a stream of data from which they can train AI systems, build new products and enhance existing ones, generate additional data, and rinse and repeat. Entrepreneurs have not yet built their companies, so they do not have data, which means they cannot create an AI product as easily; this challenge can, however, be navigated with a strategic approach. This paper presents five strategies that can help entrepreneurs access the data they need to break into the AI space, along with examples of how these strategies have been used by other companies, particularly in their early stages. Specifically, the paper discusses how entrepreneurs can: 1) start by offering a service that has value without AI and that generates data; 2) partner with a non-tech company that has a proprietary dataset; 3) crowdsource the (labelled) data they need; 4) make use of public data; and 5) rethink the need for data entirely and instead use expert systems or reinforcement learning to kickstart their AI ventures.
    Keywords: artificial intelligence (AI), machine learning (ML), data, entrepreneurship, innovation, AI start-ups, technology

  • Towards a capability assessment model for the comprehension and adoption of AI in organisations
    Tom Butler, Professor, Angelina Espinoza-Limón, Research Fellow and Selja Seppälä, Research Fellow, University College Cork, Ireland

    The comprehension and adoption of artificial intelligence (AI) are beset with practical and ethical problems. This paper presents a five-level AI capability assessment model (AI-CAM) and a related AI capabilities matrix (AI-CM) to assist practitioners in AI comprehension and adoption. These practical tools were developed with business executives, technologists and other organisational stakeholders in mind. They are founded on a comprehensive conception of AI compared with those in other AI adoption models and are also open-source artefacts. Thus, the AI-CAM and AI-CM present an accessible resource to help inform organisational decision makers on the capability requirements for: 1) AI-based data analytics use cases based on machine learning technologies; 2) knowledge representation to engineer and represent data, information and knowledge using semantic technologies; and 3) AI-based solutions that seek to emulate human reasoning and decision making. The AI-CAM covers the core capability dimensions (business, data, technology, organisation, AI skills, risks and ethical considerations) required at the five levels of capability maturity to achieve optimal use of AI in organisations. The AI-CM details the related individual and team-level capabilities needed to reach each level in organisational AI capability; it therefore extends and enriches existing perspectives by introducing knowledge and skills requirements at all levels of an organisation. It posits three levels of AI proficiency: 1) basic, for operational users who interact with AI and participate in AI adoption; 2) advanced, for professionals who are charged with comprehending AI and developing related business models and strategies; and 3) expert, for computer engineers, data scientists and knowledge engineers participating in the design and implementation of AI-based technologies to support business use cases. In conclusion, the AI-CAM and AI-CM present a valuable resource for practitioners, businesses and technologists looking to innovate using AI technologies and maximise the return to their organisations.
    Keywords: artificial intelligence, AI, capability assessment model, AI adoption, AI skills, AI capabilities, AI literacy

  • The path to autonomous driving
    Sudha Jamthe, Technology Futurist and Ananya Sen, Product Manager and Software Engineer

    We are in 2021, the year by which many automakers had promised self-driving cars. The reality is that, as we get ready to reclaim the mobility that was limited by the COVID-19 pandemic, there is still no agreement among industry experts on when we will reach full autonomous driving. The pandemic has, however, created a shift in the market drivers influencing the path to consumer adoption of vehicle autonomy, new business models using Level 4 instead of Level 5, and increased digitisation with data and artificial intelligence (AI) in vehicles. This paper analyses shifts in five key market drivers: technology; consumer adoption; business model shifts; data platforms in vehicles; and the adoption of electric vehicle and advanced driver assist systems (ADAS) features by carmakers. It proposes three potential paths towards autonomous driving.
    Keywords: autonomous cars, self-driving cars, driverless cars, technology and driverless world, full autonomy, autonomous vehicles, advanced driver assist systems (ADAS), connected vehicles, data in the car, AI in the car, data, artificial intelligence (AI)

  • Point of no return: Turning data into value
    Jochen Werne, Chief Visionary Officer, Prosegur and Johannes Winter, Managing Director, Artificial Intelligence Platform

    The Cambridge Dictionary defines the point of no return as the stage at which it is no longer possible to stop what you are doing, and when its effects can no longer be avoided or prevented. Exponential advances in technology have led to a global race for dominance in politically, militarily and economically strategic technologies such as 5G, artificial intelligence (AI) and digital platforms. A reversal of this status quo is hardly conceivable. Based on this assumption, this paper looks to the future, adding the lessons of recent years — the years in which the point of no return was passed. The paper also uses practical examples from different industries to show how digital transformation can succeed, and provides six key questions that every company should ask itself in the digital age.
    Keywords: business model innovation, business process re-engineering, data analytics, artificial intelligence (AI), platform economy, use case, digital transformation

  • Robotic process automation and the power of automation in the workplace
    Raj Samra, Senior Manager, PwC

    Robotic process automation (RPA) is a powerful tool for performing manual, time-consuming, rules-based office activities more effectively, reducing cycle time at a lower cost than other automation solutions. RPA can have a major effect on a company’s operations and strategic positioning, and it serves as a base for machine learning (ML), artificial intelligence (AI) and the creation of a more autonomous sector. This paper discusses how two of RPA’s most significant advantages — ease of deployment and improved speed and agility — are often overlooked. The paper advises business leaders to concentrate on automating as much as possible, focusing on front-end processes, optimising efficiency and striving for auditability.
    Keywords: robotic process automation (RPA), cycle time, operations, strategic positioning, machine learning (ML), artificial intelligence (AI), automating, front-end processes

  • Difficult decisions in uncertain times: AI and automation in commercial lending
    Sean Hunter, Chief Information Officer and Onur Güzey, Head of Artificial Intelligence, OakNorth

    Progress in artificial intelligence (AI) and automation has improved many parts of financial services. These techniques have struggled, however, to make inroads in many areas of commercial lending, largely because of the relative unavailability of sufficient data. Traditional techniques of extrapolation from historical data are also inadequate in times of significant disruption (such as the COVID-19 pandemic). In this paper we discuss these challenges and present techniques such as driver analysis, nowcasting and the use of AI to enable granular subsector classification and forecasting. These allow greater use of data-driven AI-augmented decision making even where full decision automation is not necessarily possible or desirable. Finally, we examine the case study of OakNorth Bank in the UK, which has used these techniques to achieve very promising results since its launch in 2015.
    Keywords: nowcasting, commercial lending, driver analysis, small to medium enterprises (SME)

  • The intelligent, experiential and competitive workplace: Part 1
    Peter Miscovich, Managing Director, Strategy + Innovation, JLL

    This two-part paper explores how intelligent automation and the convergence of accelerating technology advancements will shape the future of work and transform the workforce and the workplace of the future. Part 1 examines key intelligent automation advances and challenges with an overview of the emerging workforce and workplace models. Part 2 assesses the impact of intelligent automation and artificial intelligence (AI) upon the workforce and the workplace in greater depth, as well as the societal impacts to consider as these advancing technologies transform business, society and life itself. The paper begins with the premise that the business world is at a major inflection point whereby more businesses than not have completed their first phase of digital transformation. The COVID-19 pandemic accelerated AI, robotics, workplace automation and digital transformation initiatives that were already well underway. Trends that have been gathering momentum for years, such as workplace mobility and diverse ‘hybrid workplace’ behaviours, have rapidly gained adoption to now become mainstream. Just as the office is becoming an ecosystem of workplace options, the workforce is becoming increasingly ‘liquid’ and distributed; the ‘human cloud’ continues to evolve as many organisations turn to contract, on-demand, highly flexible and elastic labour models. Digital workplace technologies — from meeting solution software to enterprise chat platforms and desktop-as-a-service — have enabled the adoption of remote working and creation of workplace ecosystems inclusive of flexible ‘hybrid’ workplaces that can accommodate working in the office, at home or anywhere. As the post-digital era advances, the convergence of AI, robotics, workplace automation and virtual/augmented/extended reality (VR/AR/XR) technologies and 5G mobile networks will enable completely new ways of working and accelerate societal transformation. Digital technologies will enable rich, immersive and distributed virtual collaboration that will power new levels of human performance. The next phase of digital transformation will be driven by businesses willing to make AI investments to improve their competitive advantage. Over the next decade, AI will offer employees unprecedented information awareness and insight, providing greater freedom from low-value-add activities and the ability to easily adopt and use these emerging complex technologies. In the future of work, AI and immersive XR technologies will lead to greater levels of human–machine collaboration; however, policymakers, public and private organisations will need to address the risks and challenges of an increasingly AI-enabled digital world. New ways of working will offer the promise of unlocking greater human potential and may lead to some worker displacement, as well as intensifying the demand for greater workforce reskilling and continuous lifelong learning. Increasingly sophisticated AI applications, including facial recognition and deep learning neural networks, will provide new insights to address complex business problems and societal challenges. These very same advanced AI applications will also raise difficult questions regarding transparency, ethics, equity and privacy.
    Keywords: digital transformation, intelligent automation, experiential workplace, virtual/extended reality, telepresence, immersive collaborative platform technologies, human–machine collaboration, robotic process automation (RPA)

  • Responding to ethics being a data protection building block for AI
    Henry Chang, Adjunct Associate Professor, University of Hong Kong

    As a key driver of the Fourth Industrial Revolution, AI has increasing effects on all areas of human life. AI utilises and interacts with large volumes of data, including personal data or data related to individuals, which inevitably raises privacy and data protection concerns. Data protection authorities (DPAs) continue to stress that AI must comply with a set of data protection principles that were first introduced nearly half a century ago, while acknowledging these principles’ limitations in protecting individual rights under AI. These limitations include bias, discrimination, a sense of losing control, the threat of surveillance, and fears of the erosion of choice and free will. Before a set of AI-ready data protection principles can be developed and agreed internationally, DPAs have turned to ethics as an interim measure. Unlike data protection principles, however, ethics are elusive to demonstrate at best and potentially impossible to agree upon at worst. This paper explains the issues facing AI that have led DPAs to use ethics as a data protection building block. It then surveys worldwide efforts to provide ethical guidance related to AI and identifies ethical impact assessment (EIA) as a way to demonstrate commitment. The practical disconnect between knowing ethics and acting ethically, and the reasons for it, are elaborated to illustrate the challenges. The paper concludes with a discussion of how AI practitioners should continuously monitor public sentiment, government initiatives and regulatory frameworks, and take proactive action in conducting EIAs to demonstrate their commitment to and respect for ethics.
    Keywords: artificial intelligence, data analytics, data protection, privacy, ethics

  • Legal issues arising from the use of artificial intelligence in government tax administration and decision making
    Elizabeth Bishop, Barrister, Ground Floor Wentworth Chambers

    This paper examines the issues that arise when government administrative decisions, such as those relating to taxation and social security, are aided wholly or partially by artificial intelligence (AI). As we move further into the digital age, AI is increasingly adopted by the public sector to achieve greater accuracy of analysis and resource efficiency, but it is evident from instances such as the ‘Robodebt’ debacle that there is room for (non-human) error. In administrative law, such decisions can be reviewed judicially or on the merits of the case; however, the difference between an error caused by a human delegate and one caused by an automated delegate is that we are not yet able to comprehend the reasoning steps undertaken by automated decision makers. This paper highlights the emerging need to define what constitutes a decision (and a decision maker) for administrative law purposes, and addresses the implications when decisions made predominantly by AI are erroneous or cause harm to taxpayers.
    Keywords: taxation, social security, artificial intelligence (AI), public sector, Robodebt, administrative law, automated decision makers, taxpayers