
AI to serve business innovation: the experts' opinions

We have gathered the thoughts of experts who spoke at our events "AI Enabled Experiences - How AI Transforms Business," where we provided a stage to explore the potential of Artificial Intelligence (AI) and agentive technologies in corporate innovation.

Sketchin

29.02.2024 - 11 min read
A composition of photos from our past events on AI-enabled experiences

In recent months, at Sketchin, we have captured the attention of the business and technological world by organizing a series of events, "AI Enabled Experiences - How AI Transforms Business" at our locations in Milan and Rome. Our Luca Mascaro started the discussion, laying the groundwork for an in-depth dialogue on AI's transformative impact across various sectors.

Luca Mascaro shares the studio's point of view in a webinar to collect impressions and comments from designers and companies (in Italian).

The meetings attracted an audience of clients and companies eager to understand how AI can enhance their offerings, streamline organizations, and accelerate innovation processes. At the heart of the discussions was AI's ability to tackle previously insurmountable challenges, such as proactivity, extreme personalization, and managing the vast amount of information that affects business operations.

At the end of the meetings, experts from various fields shared their visions on AI, offering a comprehensive overview of its applications, current limitations, and future prospects. We collected their contributions, outlining a framework of opportunities and challenges AI presents for business and society.

Generative AI and the future of innovation

Andrea Taglioni, Global Data & AI Competence Manager at xTech in the Bip Group

Generative AIs represent a revolution in Information Retrieval and Summarisation, but reasoning is the true frontier yet to be explored. These algorithms seem to reason, but they rely on a statistical approach. The next step is developing models that truly understand and critically evaluate information.

The AI behind LLMs (Large Language Models) has proved more powerful than its creators imagined: even its polyglot nature was not anticipated. Today, Generative AIs support production use cases, especially Information Retrieval and Summarisation from exogenous sources (the web) and endogenous sources (corporate knowledge bases), in a "search and converse" mode with documents and websites. They also support creative work: generating images, videos, programming code, and original texts.
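To make the "search and converse" mode concrete, here is a minimal, hypothetical sketch in Python. The knowledge base, the question, and the bag-of-words retrieval are all illustrative assumptions; a real system would use learned embeddings, a vector index, and an actual LLM call for the conversational step.

```python
# Minimal "search and converse" sketch: retrieve the most relevant passages from a
# (hypothetical) corporate knowledge base, then hand them to an LLM as grounding context.
# Retrieval here uses a simple bag-of-words cosine similarity; production systems
# typically rely on learned embeddings and a vector index instead.
from collections import Counter
import math

knowledge_base = [
    "Our travel policy allows remote work for up to 60 days per year.",
    "Expense reports must be submitted within 30 days of the trip.",
    "The onboarding handbook describes the first-week checklist for new hires.",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list:
    q = vectorize(question)
    ranked = sorted(knowledge_base, key=lambda doc: cosine(q, vectorize(doc)), reverse=True)
    return ranked[:k]

question = "How many days do I have to submit my expense report?"
context = retrieve(question)
# The retrieved passages would be prepended to the prompt sent to the LLM, which then
# answers "in conversation", grounded in the company's own documents.
prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
print(prompt)
```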

The functionalities for generating and comparing documents are still at an intermediate stage of development. Work is needed so that Generative AIs understand a document's structure (its sections) and the relevance of each section to the overall goal, in order to produce a meaningful synthesis of a comparison (for example, comparing a resume with a job description, or tenders with their responses). Currently, the machine can express a general comparison judgment, but it cannot yet assess how important the different sections are for a particular use case.
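As an illustration of what such section-aware comparison could involve, the sketch below scores a resume against a job description section by section and combines the scores with use-case-specific weights. The section names, weights, and overlap-based similarity are purely illustrative assumptions, not a description of how current models work.

```python
# Hedged sketch of section-aware document comparison: instead of one overall judgment,
# each CV section is scored against the matching part of a job description, and the
# scores are combined with weights reflecting how much each section matters for the use case.
def similarity(cv_text: str, job_text: str) -> float:
    # Toy similarity: share of the job section's words that also appear in the CV section.
    job_words = set(job_text.lower().split())
    shared = set(cv_text.lower().split()) & job_words
    return len(shared) / max(len(job_words), 1)

cv = {
    "skills": "python sql machine learning",
    "experience": "5 years data engineering",
    "education": "msc computer science",
}
job = {
    "skills": "python machine learning cloud",
    "experience": "data engineering in production",
    "education": "degree in a quantitative field",
}
weights = {"skills": 0.5, "experience": 0.35, "education": 0.15}  # importance depends on the use case

for section in weights:
    print(f"{section}: {similarity(cv[section], job[section]):.2f} (weight {weights[section]})")
overall = sum(weights[s] * similarity(cv[s], job[s]) for s in weights)
print(f"weighted match: {overall:.2f}")
```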

The area still under development is the ability to reason. Although these algorithms seem to reason, they primarily use a statistical approach to generate responses; their ability to process information from broad sources creates the impression of reasoning. The true frontiers of reasoning are only beginning to open through the first AI reasoning experiments, in which appropriately trained Large Language Models interact to scrutinize, check, or prompt one another, marking the first steps towards genuine reasoning processes.
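A rough sketch of what such an experiment might look like is shown below: one model proposes an answer, another critiques it, and the loop continues until the critic is satisfied. The model calls are stubbed out, since the point here is only the interaction pattern, not a real implementation.

```python
# Minimal sketch of the multi-model "AI reasoning" pattern described above: a proposer
# and a critic interact in a loop. The call_* functions are stubs standing in for real
# LLM API calls; their behaviour is invented purely to show the control flow.
def call_proposer(question, critique=None):
    # Stub: a real system would prompt an LLM, including the previous critique if any.
    return f"Draft answer to '{question}'" + (f" (revised after: {critique})" if critique else "")

def call_critic(question, answer):
    # Stub: a real critic model would return objections, or None when the answer holds up.
    return None if "revised" in answer else "Please justify each step explicitly."

def reason(question, max_rounds=3):
    critique = None
    answer = ""
    for _ in range(max_rounds):
        answer = call_proposer(question, critique)
        critique = call_critic(question, answer)
        if critique is None:  # the critic accepts: stop iterating
            return answer
    return answer  # fall back to the last draft if the rounds run out

print(reason("Is this contract clause enforceable?"))
```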

The latest versions of the best-known LLMs show that multimodality is becoming increasingly established. Generative intelligence can start from any expressive form and generate another in response to a specific request (text, audio, image, or video transformed into text, audio, image, or video).

This progress must be guided by ethical and sustainable principles. Without proper guidance, innovation could become distorted and dangerous; with careful guidance, it can be extremely effective and positive. Europe's proactive attitude in managing the risks associated with AI, and the acceleration of the regulatory process following the advent of GPT, highlight the importance of the topic. It is an approach that unites opponents and proponents alike, indicating the urgency of defining a framework that is at once "open to innovation" and "protective against distortions."

AI in Law

Giuseppe Vaciago, lawyer, partner at 42 Law Firm

Despite technological advancement, AI cannot yet tackle complex tasks that require a deep understanding of law. Working on a regulatory framework that can accompany technological development is crucial, ensuring safety and ethics in using AI.

The legal limits of artificial intelligence (AI) technologies are extremely relevant and vary greatly across jurisdictions. In Europe, the Artificial Intelligence Regulation (AI Act) will soon come into effect, introducing significant restrictions to mitigate the risks associated with the use of AI. The regulation classifies AI applications by the risk they present, with stricter requirements and controls for AI considered high risk.

Its implementation is essential to ensure compliance and mitigate potential negative impacts on ethics and fundamental rights.

A tangible example of legal limits is represented by the need, in Europe, to conduct legal, ethical, and social impact assessments for high-risk AI applications. This implies a thorough analysis of the effects that AI might have on aspects such as privacy, non-discrimination, and security.

The experience with the GDPR, the personal data protection regulation, has taught us that the main goal will be achieving full application of the established rules. The planned creation of a dedicated office in Brussels to supervise and enforce the artificial intelligence regulation suggests a growing commitment to effective regulation. Only the coming months, however, will show whether this strategy succeeds and translates into consistent and reliable application of AI rules.

Outside Europe, legal limitations can vary considerably. In China, for example, regulation seems more oriented towards strategic planning, with less emphasis on detailed compliance. In the United States, the Executive Order issued by President Biden demonstrates a growing awareness of the risks associated with AI, but regulation is still evolving.

The legal, technological, and design limits in the AI field require a holistic vision and interdisciplinary collaboration to develop solutions that balance innovation with safety and responsibility.

Ethics and Social Impact of AI

Fabrizio Rauso, Strategic Advisor

AI has the power to transform our society, but we must be aware of the associated risks. It is essential to promote responsible innovation that considers ethical and social implications to ensure that AI is a positive force for humanity.

Artificial intelligence has emerged as a transformative force, revolutionizing sectors and shaping our daily lives. Its potential to automate tasks, solve complex problems, and generate new ideas has sparked a wave of innovation and disrupted traditional business models. However, as AI becomes increasingly ubiquitous, it is crucial to critically assess its impact and ensure its responsible and ethical use.

While the transformative power of AI is undeniable, it is also a source of concern. Its capacity to automate tasks raises fears of unemployment and social disruption. Furthermore, AI's intrinsic biases, if not carefully addressed, could exacerbate social inequalities and perpetuate discrimination.

To leverage the benefits of AI while mitigating its risks, prioritizing ethical considerations is essential. The development and deployment of AI must be guided by principles of fairness, transparency, accountability, and non-discrimination.

  • Fairness: AI systems must not perpetuate or exacerbate existing biases in society. The data used to train AI models must be representative and unbiased, and algorithms should be examined for potential biases.
  • Transparency: AI systems must be transparent in their operation, allowing users to understand how decisions are made and why. This transparency facilitates accountability and enables users to identify and address potential biases.
  • Accountability: Clear frameworks of ownership and accountability for AI systems should be established. Developers, users, and governing bodies must be held responsible for the ethical and responsible use of AI.
  • Non-discrimination: AI systems must be designed and implemented in a way that does not discriminate against individuals or groups based on their race, gender, ethnicity, sexual orientation, or other protected characteristics.

AI systems must be designed to be accessible to people with disabilities, ensuring that everyone can benefit from their capabilities. This includes incorporating assistive technologies, providing alternative input methods, and designing inclusive user experiences. It also means involving individuals from diverse backgrounds and areas of expertise: this diversity fosters innovation, reduces the risk of bias, and ensures that AI solutions reflect the needs of a wide range of users.

AI has immense potential to improve our lives and solve global challenges. However, its power must be exercised responsibly and ethically. By prioritizing fairness, transparency, accountability, non-discrimination, accessibility, and diversity, we can ensure that AI is a force for good, not harm. As AI continues to evolve, we must remain vigilant in addressing its ethical implications, ensuring it is used to empower and enrich humanity, not divide and diminish it.

The Democratization and Challenges of Generative AI

Stefano Di Persio, CEO at HPA

Considering that over 50% of Artificial Intelligence projects do not advance beyond the Proof of Concept (PoC) stage, companies need support in correctly assessing the impact of adopting AI tools. As individual users we can afford to "play" with ChatGPT, sometimes paying little attention to the quality and reliability of its responses, to the origin of the texts used for training, or to how the data we provide will be used; a company manager, by contrast, must be able to integrate AI into complex, long-established processes.

After the Internet and smartphones, we are certainly at the beginning of a new technological era. In the last five years, Generative Adversarial Networks (GANs) and Transformer models have made possible the birth of so-called Generative AI and its mass diffusion. We have moved from an AI used only by IT companies to build software and industry applications to a democratized AI, with an ever-growing availability of tools that produce synthetic content nearly indistinguishable from real content and allow end users to create new Generative AI applications themselves.

This loop is already producing new forms of art and new professions at the intersection of technology and creativity, alongside a multitude of startups and new businesses. The impact is already visible in hundreds of applications we use daily, but the pace at which new tools are released, together with the media hype, has also generated background noise that disorients the market.

As the head of a company that has been developing AI solutions for over six years, I see in companies a greater sensitivity to Artificial Intelligence, but also the need to discern, beyond the informational chaos, the real and concrete business opportunities. For companies that must necessarily view technological investments over a medium-to-long-term horizon, it is important to understand not only the state of the art of the technology but also its levels of reliability, transparency, security, obsolescence, and organizational impact.

As individuals and as a society, we must be aware that AI already surpasses any human being in specific areas, and that within a few years there will be systems surpassing humans not only in logic but also in knowledge and communication. Making exact predictions is impossible, both because of the speed of technological evolution and because, even though time to market keeps shrinking, what is available on the market today is only the tip of the iceberg of the work carried out daily in research centers.

Echoing the definition of Mo Gawdat, former head of Google [X], AI today is like an alien child that has landed on Earth. It has superpowers but is not yet fully aware of them. The teachings of humans will transform it either into a superhero capable of amplifying our abilities to face humanity's great challenges, from reducing inequalities to protecting the environment, or into a supervillain that accelerates our self-destruction.

Our meetings offered a deep and multidisciplinary vision of AI, showing how this technology sits at the center of a lively and complex debate about the future of business, law, and society as a whole. In the coming months, we will organize new events to present proofs of concept of UX working hand in hand with agentive technologies and adaptive interfaces.

Contact us

Contact us if you would like to open a conversation, share an idea, or collaborate with us on a new project of your own.