Benefits, Requirements, and Unwanted Side Effects
In the current discussion about the use of AI in projects, more is being written about the dangers, such as job losses and significant changes in project management tasks, than about the benefits. Yet the use of AI to handle routine tasks in projects promises a significant acceleration of processes, savings in resources and costs during project execution, and an improvement in the quality of results, both in terms of the project's product and the process, including process documentation. This is made possible by the ability of computers to analyze large amounts of data ("big data"), so that project management needs to spend only minimal time correcting or supplementing information. As a result, the quality of information increases and the error rate decreases when AI is used in projects.
AI can also bring objectivity to the analysis of stakeholders, risks, and decision options, which is an advantage especially when "cognitive biases" exist within the project team. AI can recognize patterns across various projects or project data that humans might overlook, which is particularly important for decision-making under uncertainty. In such situations, AI can help prepare the information needed for critical decisions, easing the work of the steering committee. Finally, the use of AI also appeals to Generation Z, which expects modern technology in its daily work and often chooses employers based on this criterion (Tiba 2021).
To realize the benefits of using AI for routine tasks in projects, a variety of prerequisites must be established. These include formulating a clear strategy and specific goals for the use of AI within the company, as well as establishing AI governance to ensure that AI is used in an ethically responsible and transparent manner.
Adherence to both internal and external compliance regulations must also be considered when implementing AI. This includes issues such as data protection and information security. Additionally, precautions must be taken regarding the quality of AI-generated results, as AI can sometimes "hallucinate," meaning it can produce incorrect or misleading outputs. It is therefore advisable to subject AI results to a quality control check (Galgenmüller & Wagner 2024). One often overlooked prerequisite for the use of AI is the availability of data. This may sound trivial, but it presents a real challenge for many companies, as information is often stored in a variety of systems and formats. The lack of system integration can significantly hinder the use of AI and requires investment to resolve.
The qualifications of project participants are also a critical factor in successfully utilizing AI. This includes competencies in using digital tools and technologies, methods for data analysis ("data analytics"), and the ability to assess the quality of AI outputs. Finally, the implementation of AI should be supported by change management, which focuses on addressing the concerns and fears of those affected, ensuring their involvement, and guiding them through the introduction process.
Nevertheless, the use of AI in a company can lead to unwanted side effects. For example, the introduction of AI may cause fear among some employees, leading to rejection or "passive resistance," where they no longer engage as fully as they did before or only perform tasks strictly by the book. Another side effect is that employees might rely entirely on AI and stop critically questioning its outputs.
This is particularly problematic for project managers, who are ultimately responsible for the project's results and must make their own assessments. Technical dependence on systems and data can also pose problems, especially in the event of a power outage or system crash. Before using AI, it is essential to ensure system integration; otherwise, crashes and errors may occur. Some applications (such as blockchain) require significant computing power, leading to high energy consumption, which raises questions about the sustainability of AI use. Further questions concern the data itself: Who has access to which data? Are the data protected from unauthorized access, and how can violations of third-party rights be prevented? Data protection and security are central risks associated with AI applications and should therefore be addressed as part of an AI governance framework.