"My ambition for the Journal of Data Protection & Privacy is for it to further develop the foundations, tools and methodologies for Privacy by Design, and to help getting them applied in practice."
Volume 3 (2024)
Each volume of Journal of AI, Robotics and Workplace Automation consists of four 100-page issues published both in print and online.
Articles published in Volume 3 include:
Volume 3 Number 2
Editorial
AI, robotics and automation and the impact on learning and the discovery of new ideas
Chris Johannessen, Axis Group and Editor, Journal of AI, Robotics and Workplace Automation
Practice Papers
The theoretical potential of algorithmic and automated pricing to increase company value
Peter Rathnow, Rathnow Consulting, Benjamin Zeller, Ivoclar Vivadent and Matthias Lederer, OTH Technical University of Applied Sciences Amberg-Weiden
This paper explores the potential benefits and challenges of algorithmic and automated pricing for businesses and critically examines the associated ethical and legal implications. To this end, in this first part of a two-part study, seven areas of discussion were identified in which the future of algorithmic pricing can be described as a whole (e.g. data, resource allocation). Within these areas, a total of nine central hypotheses (e.g. on the effects on competitive situations and the development of user acceptance) were developed on the basis of current scientific findings. The overall study takes a customer-centric approach and proposes that algorithmic pricing should be seen not only as a tool to maximise profits, but also as a strategy that creates added value for both the company and the consumer. In the second part of the study (to be published in the next issue of the journal), the hypotheses developed in this paper will be evaluated by experts in the field.
Keywords: algorithmic pricing; dynamic pricing; shareholder value; customer centricity; profit maximisation; value creation
Towards hybrid self-learning ontologies: A new Python module for closed-loop integration of decision trees and OWL
Simon J. Preis, Weiden Business School
Ontologies are a valuable tool for organising and representing knowledge. Decision trees (DTs) are a machine learning (ML) technique that can be used to learn from data. The integration of DTs and ontologies has the potential to improve the performance of both technologies. This paper proposes a novel approach to integrating DTs and ontologies. For that purpose, a new Python module is developed and validated to demonstrate this approach. The dt2swrl module has been evaluated against two research questions (RQs). RQ1 asked whether existing Python packages are capable of integrating DT rules with ontologies. The investigation showed that none of the existing packages can achieve the defined goals. RQ2 asked whether a generic module can be developed to automatically integrate DT rules with ontologies. The dt2swrl module has been developed to address this gap and it has been shown to be effective in achieving the desired goals. It can be used to develop hybrid ontologies that seamlessly integrate expert-based and data-based rules, and self-learning ontologies that can automatically maintain their rules based on new data. The paper concludes by discussing the limitations of the dt2swrl module and the implications of the research for future work.
Keywords: ontologies; OWL; SWRL; decision tree; Python
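The dt2swrl module itself is not reproduced in the abstract, but the core idea, mapping each root-to-leaf path of a decision tree to a SWRL rule, can be sketched in plain Python. Everything below (the toy tree, the predicate names, the rendering of the swrlb built-ins) is illustrative only, not the paper's actual implementation:

```python
# A decision tree expressed as nested dicts: internal nodes test a numeric
# feature against a threshold; leaves carry a class label.
TREE = {
    "feature": "temperature",
    "threshold": 30.0,
    "left": {"label": "Normal"},              # temperature <= 30.0
    "right": {                                # temperature > 30.0
        "feature": "pressure",
        "threshold": 1.5,
        "left": {"label": "Warning"},
        "right": {"label": "Critical"},
    },
}

def tree_to_swrl(node, conditions=None):
    """Walk every root-to-leaf path and emit one SWRL-style rule per leaf."""
    conditions = conditions or []
    if "label" in node:  # leaf: the accumulated conditions imply the class
        body = " ^ ".join(conditions) or "Thing(?x)"
        return [f"{body} -> {node['label']}(?x)"]
    f, t = node["feature"], node["threshold"]
    rules = []
    rules += tree_to_swrl(
        node["left"],
        conditions + [f"{f}(?x, ?v{f}) ^ swrlb:lessThanOrEqual(?v{f}, {t})"])
    rules += tree_to_swrl(
        node["right"],
        conditions + [f"{f}(?x, ?v{f}) ^ swrlb:greaterThan(?v{f}, {t})"])
    return rules

for rule in tree_to_swrl(TREE):
    print(rule)
```

Each emitted rule is a conjunction of datatype-property atoms and swrlb comparison built-ins implying a class atom, which is the shape a module like dt2swrl would need to write back into an OWL ontology.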
Enhancing transparency in AI-powered customer engagement
Tara DeZao, Pegasystems
This paper addresses the critical challenge of building consumer trust in AI-powered customer engagement by emphasising the necessity for transparency and accountability. Despite the potential of AI to revolutionise business operations and enhance customer experiences, widespread concerns about misinformation and the opacity of AI decision-making processes hinder trust. Surveys highlight a significant lack of awareness among consumers regarding their interactions with AI, alongside apprehensions about bias and fairness in AI algorithms. The paper advocates for the development of explainable AI models that are transparent and understandable to both consumers and organisational leaders, thereby mitigating potential biases and ensuring ethical use. It underscores the importance of organisational commitment to transparency practices beyond mere regulatory compliance, including fostering a culture of accountability, prioritising clear data policies and maintaining active engagement with stakeholders. By adopting a holistic approach to transparency and explainability, businesses can cultivate trust in AI technologies, bridging the gap between technological innovation and consumer acceptance, and paving the way for more ethical and effective AI-powered customer engagements.
Keywords: artificial intelligence (AI); transparency; consumer trust; ethical concerns; AI explainability; organisational accountability; regulatory compliance
Information retrieval from textual data: Harnessing large language models, retrieval augmented generation and prompt engineering
Asen Hikov and Laura Murphy, Amplify Analytix
This paper describes how recent advancements in the field of Generative AI (GenAI), and more specifically large language models (LLMs), are incorporated into a practical application that solves the widespread and relevant business problem of information retrieval from textual data in PDF format: searching through legal texts, financial reports, research articles and so on. Marketing research, for example, often requires reading through hundreds of pages of financial reports to extract key information for research on competitors, partners, markets and prospective clients. It is a manual, error-prone and time-consuming task for marketers, where until recently there was little scope for automation, optimisation and scaling. The application we have developed combines LLMs with a retrieval augmented generation (RAG) architecture and prompt engineering to make this process more efficient. We have developed a chatbot that allows the user to upload multiple PDF documents and obtain a summary of predefined key areas as well as to ask specific questions and get answers from the combined documents’ content. The application’s architecture begins with the creation of an index for each of the PDF files. This index includes embedding the textual content and constructing a vector store. A query engine, employing a small-to-big retrieval method, is then used to accurately respond to a set of predefined questions for each PDF to create the summary. The prompt has been designed in a manner that minimises the risk of hallucination which is common in this type of model. The user interacts with the model via a chatbot feature. It utilises similar small-to-big retrieval techniques over the indices for straightforward queries, and a more complex sub-questions engine for in-depth analysis, providing a comprehensive and interactive tool for document analysis. 
We have estimated that the implementation of this tool would reduce the time spent on manual research tasks by around 60 per cent, based on the discussions we have had with potential users.
Keywords: RAG architecture; LLM; PDF parsing; query engine
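The small-to-big retrieval step described above can be illustrated with a deliberately tiny sketch: the query is matched against small chunks (here with bag-of-words cosine similarity standing in for real embeddings), and the matching chunk's larger parent passage is returned as context. All data and names are invented for illustration; a production system would use an embedding model and a vector store.

```python
import math
from collections import Counter

# Toy corpus: each small chunk (a sentence) points back to its larger
# parent passage.
PASSAGES = {
    "p1": "Revenue grew 12 per cent in 2023. Growth was driven by the EU segment.",
    "p2": "The board approved a new dividend policy. Payouts rise to 40 per cent of profit.",
}
CHUNKS = [  # (small chunk, parent passage id)
    ("Revenue grew 12 per cent in 2023.", "p1"),
    ("Growth was driven by the EU segment.", "p1"),
    ("The board approved a new dividend policy.", "p2"),
    ("Payouts rise to 40 per cent of profit.", "p2"),
]

def bow(text):
    """Bag-of-words vector as a Counter (a stand-in for an embedding)."""
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query):
    """Small-to-big: score the query against small chunks, return the parent."""
    q = bow(query)
    _, best_parent = max(CHUNKS, key=lambda c: cosine(q, bow(c[0])))
    return PASSAGES[best_parent]

print(retrieve("revenue growth in 2023"))
```

Matching against small chunks keeps retrieval precise, while answering from the larger parent gives the LLM enough surrounding context, which is the rationale for the small-to-big design mentioned in the abstract.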
Digital transformation with robo-advising and AI in asset management: A critical appraisal
Jörg Orgeldinger, Fernuni Hagen
After the two revolutions in finance over the last century — the efficient market mathematical finance revolution in the 1950s and the behavioural finance revolution in the 1970s — now the third finance revolution with ‘machine finance’ has begun. Robo-advising combines artificial intelligence (AI) and financial expertise to offer accessible and personalised financial guidance. It analyses data, optimises portfolios and provides lower-cost investment strategies. Robo-advisors automate tasks such as portfolio rebalancing and offer efficiency and rational decision making; however, concerns regarding data privacy, algorithmic bias and regulations need attention. This paper explores the benefits, challenges and regulatory landscape of robo-advising. It emphasises the support robo-advisory services provide in minimising risk, generating returns and maintaining portfolios. They offer an efficient alternative to traditional advisors, using technology and expertise for personalised investment guidance. Further, the paper describes the fee structure of robo-advisors and the benefits they offer to financial advisors. It explains that traditional financial advisors typically charge fees ranging from 1 to 1.5 per cent of total assets managed, while robo-advisors charge between 0 and 0.25 per cent for basic services. The paper also explores the impact of fees on investment performance and introduces the concept of robo-advisors using index exchange-traded funds (ETFs) to minimise fees. It further explains the two main types of robo-advisors: hybrid and pure. Additionally, the paper provides examples of prominent robo-advisor providers and their assets under management. It concludes by highlighting the potential influence of AI on robo-advising, including data analysis, personalised recommendations and improved communication with investors.
Keywords: digital finance; robo-advising; AI; machine finance
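The fee ranges quoted in the abstract compound dramatically over long horizons. A back-of-envelope calculation, with assumed illustrative numbers (7 per cent gross annual return, a 30-year horizon, representative fees of 1.25 per cent versus 0.25 per cent), shows the gap:

```python
def final_value(principal, gross_return, fee, years):
    """Compound the net annual return (gross return minus advisory fee)."""
    return principal * (1 + gross_return - fee) ** years

principal, gross, years = 100_000, 0.07, 30
traditional = final_value(principal, gross, 0.0125, years)  # 1.25% annual fee
robo = final_value(principal, gross, 0.0025, years)         # 0.25% annual fee

print(f"Traditional advisor: {traditional:,.0f}")
print(f"Robo-advisor:        {robo:,.0f}")
print(f"Difference:          {robo - traditional:,.0f}")
```

On these assumptions the one-percentage-point fee gap ends up costing more than the entire starting principal, which is the arithmetic behind the paper's emphasis on low-fee index ETFs.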
Augmented reality in smart retail for inventory management
Sandeep Shekhawat, Walmart Global Tech
The ever-growing complexity of supply chains and the dynamic nature of modern business environments demand innovative solutions for efficient and cost-effective inventory management. This paper introduces a novel approach to address these challenges through the integration of augmented reality (AR) technology. Our proposed system leverages AR to enhance the accuracy, speed and adaptability of inventory management processes while maintaining a focus on cost-effectiveness. By overlaying digital information onto the physical world, AR transforms the way inventory is monitored, tracked and updated. The system utilises wearable AR devices, such as smart glasses, to provide real-time insights into inventory levels, product locations and order fulfilment status. This not only reduces manual errors but also significantly improves the overall efficiency of inventory-related tasks. To validate the effectiveness of our approach, we conducted a series of simulations and case studies across diverse industries. The results demonstrate notable improvements in inventory accuracy, order fulfilment speed and overall operational efficiency. The proposed AR-based inventory management system emerges as a viable and cost-effective solution, poised to revolutionise traditional inventory management practices in the contemporary business landscape.
Keywords: augmented reality (AR); inventory management; supply chain; cost-effectiveness; ArUco markers; stocking shifts
Are businesses giving AI the support it needs to grow and scale?
Nathalie Zeghmouli, Strategic Business Development Manager
Artificial intelligence (AI) is rapidly transforming industries, and financial services are no exception. This paper explores how AI can revolutionise both customer interactions and back-office operations within banks and insurance companies. AI-powered tools like chatbots and recommendation engines can answer questions 24/7, identify customer trends and suggest personalised products, all of which can significantly improve customer satisfaction. In the back office, AI automates repetitive tasks such as data analysis and document processing. This frees up human employees to focus on more complex and strategic activities, while also leading to cost savings and increased efficiency. However, implementing AI in financial services comes with its own set of challenges. Businesses must continuously invest in AI to keep pace with rapid technological advancements. Regulatory frameworks are struggling to adapt to the evolving nature of AI, which creates uncertainty for businesses. Additionally, ensuring transparency and fairness in AI decision making, especially with complex algorithms, is a major hurdle. The quality of data used to train AI models is also crucial. Biased data can lead to discriminatory outcomes, so maintaining high-quality data is essential. These challenges have led to the development of explainable AI (XAI) models, which can explain their reasoning, helping to address transparency concerns and meet regulatory requirements. Data quality management strategies are also required to ensure data is complete, accurate and unbiased for ethical AI implementation. On a global scale, collaboration among international organisations and governments is necessary to establish consistent and effective AI regulations. The paper concludes that the human element is invaluable in the age of AI. Human expertise is needed to program AI systems, address ethical considerations, and manage potential biases within the data. 
Furthermore, the workforce needs to be upskilled and reskilled to adapt to the changing landscape with AI. The paper recommends a ‘human-in-the-loop’ (HITL) approach, in which human judgment is combined with AI capabilities for responsible and efficient decision making. Collaboration and international harmonisation are key to ensuring responsible and ethical AI adoption in financial services.
Keywords: artificial intelligence (AI); regulation; financial services; data; ethics; human in the loop (HITL); future of work
Volume 3 Number 1
Editorial
Andreas Welsch, Chief AI Strategist, Intelligence Briefing and Editorial Board member, Journal of AI, Robotics & Workplace Automation
Generative AI Papers
Enabling generative AI through use cases in a big enterprise
Enrique Mora and Luca Dell’Orletta, Nestlé
In the emerging field of generative artificial intelligence (GenAI), we possess the potential to significantly enhance our business operations and processes. Achieving this goal within a large corporation like Nestlé is challenging, however, given the immature stage of this technology. This paper outlines the approach to implementing GenAI at Nestlé, guided by the most influential use cases. It also underscores the importance of scaling people’s capabilities and establishing legal, ethical and compliance frameworks to support the deployment of this technology.
Keywords: AI; generative AI; GenAI; LLM; enterprise
Building resilient SMEs: Harnessing large language models for cyber security in Australia
Ben Kereopa-Yorke, Telco
The escalating digitalisation of our lives and enterprises has led to a parallel growth in the complexity and frequency of cyberattacks. Small and medium-sized enterprises (SMEs), particularly in Australia, are experiencing increased vulnerability to cyber threats, posing a significant challenge to the nation’s cyber security landscape. Embracing transformative technologies such as artificial intelligence (AI), machine learning (ML) and large language models (LLMs) can potentially strengthen cyber security policies for Australian SMEs. Their practical application, advantages and limitations remain underexplored, however, with prior research mainly focusing on large corporations. This study aims to address this gap by providing a comprehensive understanding of the potential role of LLMs in enhancing cyber security policies for Australian SMEs. Employing a mixed-methods study design, this research includes a literature review, qualitative analysis of SME case studies and a quantitative assessment of LLM performance metrics in cyber security applications. The findings highlight the promising potential of LLMs across various performance criteria, including relevance, accuracy and applicability, although gaps remain in areas such as completeness and clarity. The study underlines the importance of integrating human expertise with LLM technology and refining model development to address these limitations. By proposing a robust conceptual framework guiding the effective adoption of LLMs, this research aims to contribute to a safer and more resilient cyber environment for Australian SMEs, enabling sustainable growth and competitiveness in the digital era.
Keywords: cyber security; artificial intelligence; AI; large language models; LLM; AI in cyber security; technological innovation
Measuring the business value of generative AI
Jim Sterne, Target Marketing of Santa Barbara
Generative artificial intelligence (GenAI) can deliver tangible and intangible values that can be calculated to decide which projects benefit from GenAI and which do not. This paper is intended to be a guide for businesses just starting to build traction for their ideas. The focus is on evaluating and leveraging GenAI’s potential to innovate faster and compete effectively in a rapidly evolving digital economy. The paper specifies the many ways GenAI can have an impact on a business and considers how to measure that impact. It starts with standard business metrics (revenue, profit, customer satisfaction, etc.) and then turns to the more esoteric task of measuring the impact on creativity, inspiration and innovation, followed by business disruption and process metrics. It finishes with a look at improving the process of process improvement itself.
Keywords: generative AI; business metrics; economic impact; innovation; digital transformation
Machine unlearning for generative AI
Yashaswini Viswanath, Resident Researcher, Business School of AI, et al.
This paper introduces machine unlearning, a new field of AI research, and examines the challenges and approaches involved in extending it to generative AI (GenAI). Machine unlearning is a model-driven approach to making an existing artificial intelligence (AI) model unlearn a set of data from its training. Machine unlearning is becoming important for businesses: to comply with privacy laws such as the General Data Protection Regulation (GDPR) and its customers’ right to be forgotten, to manage security, and to remove bias that AI models learn from their training data, since it is expensive to retrain and redeploy models without the biased, security-compromising or privacy-compromising data. This paper presents the state of the art in machine unlearning approaches, such as exact unlearning, approximate unlearning, zero-shot learning (ZSL) and fast and efficient unlearning. The paper highlights the challenges in applying machine unlearning to GenAI, which is built on a transformer architecture of neural networks and adds further opaqueness to how large language models (LLMs) learn in pre-training, fine-tuning, transfer learning to more languages and inference. The paper elaborates on how models retain learning in a neural network, in order to guide the various machine unlearning approaches for GenAI, which the authors hope can be built upon. The paper suggests possible future directions of research to create transparency in LLMs, and particularly looks at hallucinations in LLMs when they are extended to do machine translation for new languages beyond their training with ZSL, to shed light on how a model stores its learning of newer languages in its memory and how it draws upon it during inference in GenAI applications. Finally, the paper calls for collaborations on future research in machine unlearning for GenAI, particularly LLMs, to add transparency and inclusivity to language AI.
Keywords: machine unlearning; privacy; right to be forgotten; generative AI; fine-tuning; large language models; LLM; zero-shot learning; explainability
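Exact unlearning, the strictest approach surveyed in the abstract, is straightforward for models whose parameters are pure sums over training records, because a record's contribution can be subtracted to yield exactly the model that retraining without it would produce. The toy count-based model below illustrates that property only; transformer-based LLMs do not decompose this way, which is precisely why unlearning for GenAI is hard.

```python
from collections import Counter

class CountModel:
    """Toy classifier whose parameters are per-(label, word) counts,
    i.e. plain sums over training records: each record's contribution
    can therefore be subtracted exactly."""
    def __init__(self):
        self.counts = Counter()

    def learn(self, record):
        for word in record["text"].split():
            self.counts[(record["label"], word)] += 1

    def unlearn(self, record):
        # Exact unlearning: subtract precisely what learn() added.
        for word in record["text"].split():
            self.counts[(record["label"], word)] -= 1
        self.counts += Counter()  # drop entries that fell to zero

data = [
    {"label": "spam", "text": "win money now"},
    {"label": "spam", "text": "win a prize"},
    {"label": "ham", "text": "meeting at noon"},
]

m = CountModel()
for r in data:
    m.learn(r)
m.unlearn(data[0])  # "forget" the first record

# Identical to retraining from scratch without that record:
fresh = CountModel()
for r in data[1:]:
    fresh.learn(r)
assert m.counts == fresh.counts
print("unlearned model equals retrained model")
```

The final assertion is the defining guarantee of exact unlearning; approximate unlearning relaxes it in exchange for tractability on models, such as neural networks, where contributions are entangled rather than additive.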
Business value of generative AI use cases
Fuad Hendricks, Accenture
This paper discusses the significant impact of artificial intelligence (AI), specifically generative AI (GenAI), on various industries and business processes. It highlights the rapid adoption of AI technologies and their profound influence on global business operations. The paper shares views on the substantial growth rate for the technology, with many businesses already in piloting phases or production stages. It emphasises the transformative role of generative AI in creating new services and products, necessitating changes in operating models, technology stacks and workforce skills. The paper also touches on various industries affected by AI, the potential for automation and augmentation and the strategic planning required for businesses to effectively implement and benefit from these technologies. Additionally, it discusses the importance of responsible AI, addressing risks such as bias and privacy, and complying with emerging regulations. The paper highlights the crucial role of AI in modernising business practices and creating competitive advantages in various sectors.
Keywords: generative AI; data strategy; business transformation; efficiency; AI at scale; responsible AI; regulation
Minimise model risk management oversight for cyber security solutions
Liming Brotcke, Ally Financial
Adoption of artificial intelligence (AI) and machine learning (ML)-powered cyber security tools and models by financial institutions has received considerable attention in the model risk management community. In parallel, developing trustworthy AI that is more explainable, fair, robust, private and transparent has also received considerable research and regulatory attention. Appropriate governance of cyber security models is inevitable. The prevailing thought at present is to have the model risk management function oversee the development, implementation and use of such cyber security tools and models. This study first demonstrates two primary challenges of executing this oversight and then offers a few practical suggestions to ensure a reasonable application.
Keywords: cyber security; data science; model validation; machine learning
Analysis of ChatGPT and the future of artificial intelligence: Its effect on teaching and learning
Gunja Kumari Sah and Dipak Kumar Gupta, Tribhuvan University, and Amar Prasad Yadav, Tribhuvan University and Rajarshi Janak University
This research paper aims to analyse the use of ChatGPT and its future in the teaching and learning process. The research was based on a descriptive research design and model that examined the use of ChatGPT among university students and leveraged the technology acceptance model (TAM). Samples were obtained by purposive sampling from Nepal’s oldest universities (Tribhuvan University, Purwanchal University, Kathmandu University and Pokhara University). Data was collected directly through field and Internet surveys at the sample universities using structured questionnaires. The research included respondents based on their use of ChatGPT. Information related to ChatGPT was collected from 400 respondents from the science and management faculties. Only 280 respondents had used the application, however, so these 280 replies were analysed to draw conclusions. Data was entered into SPSS and AMOS software for structural equation modelling. The measurement and structural models were found to be reliable and valid. According to the study, 54.3 per cent of users felt ChatGPT was crucial and 53.9 per cent believed it provided reliable information; however, 47.1 per cent of respondents said it is neither secure nor unsafe, and 71.4 per cent said ChatGPT has an advantageous effect on users’ performance. ChatGPT is used by 42.9 per cent of users for help, mostly to find answers to questions. Users’ facilitating conditions and behaviour are significantly influenced by performance expectations. The relationship between teachers’ and students’ performance expectations toward ChatGPT and their behaviour is also mediated by facilitating conditions.
Keywords: artificial intelligence; AI; generative pre-trained transformer; GPT; ChatGPT; learning; technology acceptance model; TAM; teaching
Automation and AI in the workplace: The future of work is more complex than ever
Chelsea Perino, The Executive Centre
Artificial intelligence (AI) and automation are revolutionising the workplace, transforming the way businesses operate, how people interact and having an impact on the future of work. These technologies have the potential to enhance productivity, address societal challenges and contribute to economic growth. With machines becoming increasingly capable, they can now perform tasks previously done by humans, complement human work and even surpass human capabilities in certain areas. As a result, the nature of work is changing, with some occupations declining, others growing and many more undergoing significant transformations. This paper explores the promise and challenges of AI automation in the workplace, highlighting key workforce transitions and presenting several critical issues that need to be solved.
Keywords: AI; automation; future of work; coworking; real estate; technology; flexible workspace; hybrid work
Practice Paper
What is in the black box: The ethical implications of algorithms and transparency in the age of the GDPR
Senna Mougdir, Dutch Data Protection Authority
Algorithms silently construct our lives. They can determine whether someone is hired or promoted, whether they are offered loans or housing, and which political advertisements and news articles consumers see. The General Data Protection Regulation (GDPR), which came into force on 25th May, 2018, regulates the protection of the personal data of individuals within the European Union (EU) member states. While innovation is important to our society, it is also important that organisations using artificial intelligence technologies and big data comply with the GDPR to ensure our privacy and data are protected. The ethical implications and the responsibility of the algorithm in these important decisions are, however, unclear. Algorithms are important participants in ethical decision making and influence the delegation of roles and responsibilities in these decisions. This paper focuses on bias in algorithms and examines whether developers will be responsible for the algorithms they use in future, what these organisations are responsible for, and the normative basis of this responsibility. Finally, it gives recommendations that could be useful for organisations using algorithms in automated decision making.
Keywords: General Data Protection Regulation (GDPR); artificial intelligence (AI); algorithms; machine learning (ML); big data; ethics
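A common first diagnostic for the algorithmic bias the paper discusses is to compare selection rates across groups; ratios below 0.8 are often flagged under the informal ‘four-fifths rule’. The decision data and the threshold below are illustrative only, not drawn from the paper:

```python
def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity).
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    rates = selection_rates(decisions)
    lo, hi = sorted([rates[group_a], rates[group_b]])
    return lo / hi if hi else 1.0

# Illustrative loan decisions: (applicant group, approved?)
decisions = [("A", True), ("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = disparate_impact(decisions, "A", "B")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75
```

A rate-ratio check of this kind is only a surface measure; it cannot by itself establish the causes of a disparity, which is why the paper's questions about developer responsibility and the normative basis for it remain open.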