Society and AI

Society and AI: An Exploration of Expectations and Decision Making

Jan W Veldsink, 2024

The emergence of artificial intelligence (AI) raises important questions about the role of this technology in our society. The sources highlight society's normative expectations of AI and the ways in which information is used to make decisions about AI.

Normative expectations



Society has high expectations of AI at the ethical, legal, cultural, and professional levels.

Ethical: People want AI to be developed and deployed in an ethically responsible way, with respect for human rights and fairness. This means that ethical guidelines and standards are needed to ensure that AI systems do not discriminate and that they promote people's well-being.

Legal: Clear legislation around AI is essential to ensure that AI development complies with existing laws and protects citizens' rights. This includes legislation on data privacy, liability, and transparency.

Cultural: Society is open to AI, but only if this technology improves the quality of life and respects cultural values. AI systems should not undermine our norms and values, but rather contribute to a better society.

Professional: Professionals in the AI sector must adhere to high ethical standards and prioritize the well-being of individuals and communities in their work. Establishing professional organizations that set standards for AI development and implementation can increase the credibility and reliability of AI technologies.

Decision-making


Society uses different types of information to make informed decisions about AI.

Descriptive information: Data on public opinion regarding AI technologies is crucial to understanding the concerns and aspirations of the community. Surveys and studies can show to what extent society is ready for AI and which aspects require attention.

Predictive information: Predictive analyses are used to anticipate the potential impact of AI on different sectors. This enables policymakers and organizations to act proactively and to address possible challenges and opportunities.

Prescriptive information: Data-driven recommendations can help formulate effective strategies for responsible AI development and implementation. By sharing prescriptive insights among developers, policymakers, and community leaders, AI initiatives can be better tailored to the needs of society.

Evaluative information: The evaluation of AI programs and initiatives is important to assess their impact on communities and to encourage continuous improvement. Evaluation frameworks help organizations to measure and account for the successes and social effects of AI initiatives.

Conclusion



The sources show that society wants to play an active role in the development and implementation of AI. Expectations are high, and there is a need for ethical guidelines, legal frameworks, cultural sensitivity, and professional responsibility. Through a data-driven decision-making process that draws on descriptive, predictive, prescriptive, and evaluative information, society can strive for responsible and socially desirable use of AI.

From senseless to meaningful

From Bullshit Jobs to Generative AI: A Transformative Shift in the Workplace


Jan W Veldsink, 2024



In today’s fast-paced work environment, many employees find themselves mired in tasks that seem pointless and unfulfilling. These “bullshit jobs” can lead to frustration and disengagement, prompting a fundamental question: What if we could harness technology to transform our work experience? Enter Generative AI, an innovation that has the potential to redefine not only how we work but also how we connect with each other in the workplace.

The Unread Files



One of the significant issues plaguing workplaces today is the overwhelming amount of information that employees must sift through. Many documents, reports, and communications were never intended to be read by anyone. This “information overload” can lead to decreased productivity as employees spend time on materials that do not offer any real value.

Generative AI presents a solution to streamline this process. Imagine leveraging AI tools that can summarize lengthy reports, extract key insights, and present relevant information quickly and efficiently. This innovation allows employees to focus their energy on what truly matters, optimizing their contributions and enhancing overall organizational productivity.
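
As an illustration, the sketch below shows what such a summarization step could look like with the OpenAI Python client; the model choice, prompt, and file name are illustrative assumptions rather than a prescribed setup.

    # Minimal summarization sketch using the OpenAI Python client.
    # The model name and prompt are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize(report_text: str) -> str:
        """Condense a lengthy report into a few key insights."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Summarize this report in five bullet points, "
                            "highlighting decisions and action items."},
                {"role": "user", "content": report_text},
            ],
        )
        return response.choices[0].message.content

    # Example (hypothetical file):
    # print(summarize(open("quarterly_report.txt").read()))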

The Unseen Writings



Another aspect to consider is the vast amount of content generated within organizations, ranging from emails to reports that often go unread. Much of this content serves little purpose and adds to the sense of frustration tied to unfulfilling jobs. The good news? Generative AI can generate relevant and engaging materials, ensuring communication becomes more meaningful. This shift transforms routine exchanges into opportunities for genuine connection and collaboration among team members.

The Power of Tools Like ChatGPT



As we adopt AI technologies like ChatGPT, we begin to realize that leveraging these tools is not just optional—it’s essential. They can significantly enhance productivity and relieve employees from monotonous tasks. However, it’s crucial to consider how this technological embrace impacts teamwork and collaboration.

The Loneliness Challenge



A recent article from the Harvard Business Review, titled "We're Still Lonely at Work," sheds light on a concerning issue: loneliness in the workplace. According to the article, one in five employees globally feels lonely at work. As organizations increasingly rely on automated processes, the risk of isolation among employees also rises.

This presents a challenge: how do we balance the efficiency brought about by AI with the need for genuine human interaction? While generative AI can alleviate mundane tasks, it’s vital for organizations to implement proactive measures to foster connections among employees. Engaging in regular check-ins, organizing team-building activities, and maintaining open channels of communication are essential strategies for mitigating loneliness.

A Path Forward



The transition from “bullshit jobs” to a workplace enriched by generative AI calls for intentionality and mindfulness. Embracing these technological advancements should go hand in hand with efforts to nurture human relationships within organizations. Let’s strive to create a work culture where productivity thrives and meaningful connections are prioritized.

In the evolving landscape of work, the challenge lies in utilizing AI tools while ensuring that team members feel valued and connected. Together, we can pave the way for a more fulfilling and engaging workplace, enhancing both productivity and interpersonal relationships.

Thank you for reading, and let’s continue to explore how we can create a better work environment for everyone!

The Pitfalls of Focusing Solely on Technology

Misinterpretation of AI Regulation

Jan W Veldsink MSc, Narrative Shaper in AI and Ethics. Date: 24 July 2024


In recent discussions surrounding the regulation of artificial intelligence (AI), a significant trend has emerged: the overemphasis on technology itself, often at the expense of understanding its implications and effects on society. As highlighted in an article from NRC, a survey conducted by the Dutch Data Protection Authority (AP) revealed a troubling reality among local governments. These entities are increasingly utilizing AI technologies, yet many find themselves grappling with their complexities and unintended consequences.

The Local Government Context

Local governments across the Netherlands are beginning to integrate AI into various routine processes, from facial recognition checks at passport applications to the automated issuance of parking fines based on license plate recognition. However, this trend raises critical questions about the role of supervision and the factors that regulators consider when developing regulations for AI systems.

The Misclassification of AI Technologies

One of the primary errors that regulators make is mistakenly categorizing certain technologies as fully autonomous AI systems capable of learning and making decisions. For instance, when municipalities employ facial recognition technology for passport applications, this technology is often misconstrued as an autonomous decision-making system. In reality, it functions as a simple local check, akin to the facial recognition capabilities found on smartphones. The government can legally use biometric data from passports for this purpose, yet this does not involve the deep-learning capacities commonly associated with AI.
Similarly, when municipalities apply AI for issuing parking fines, the technology in use often involves limited image recognition capabilities. The AI that scans license plates typically operates under strict rules: if a vehicle is not in compliance, a fine is issued automatically. Such a system does not raise the broader issues of AI ethics and accountability; rather, it relies on rigid rules that dictate outcomes without room for nuanced, intelligent decision-making.
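
To make that contrast concrete, the toy sketch below reduces such a pipeline to its essence; every name in it is hypothetical. Only the plate-reading step would involve image recognition; the fine itself follows a fixed, non-learning rule.

    # Toy sketch of a rule-bound fining pipeline; all names hypothetical.
    from dataclasses import dataclass

    # Stand-in for the municipal permit register.
    PERMITS = {"zone-A": {"AB-123-C"}}

    @dataclass
    class Scan:
        plate: str   # output of the image-recognition step
        zone: str    # parking zone where the scan was made

    def has_valid_permit(plate: str, zone: str) -> bool:
        return plate in PERMITS.get(zone, set())

    def process(scan: Scan) -> str:
        # No discretion and no learning: non-compliance automatically
        # triggers a fine.
        if has_valid_permit(scan.plate, scan.zone):
            return "no action"
        return f"fine issued for {scan.plate}"

    print(process(Scan(plate="XY-999-Z", zone="zone-A")))  # fine issued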

The Dangers of Autonomy in AI

The potential for harm becomes significant when we consider scenarios where AI systems are deployed to assess welfare benefits applications. If such systems were to incorporate facial recognition as part of their evaluation process, this could lead to severe ethical issues, including biased decision-making and a lack of transparency. Here we encounter the core argument: we should be focusing not solely on the technology itself but on the effects it has on individuals and communities.

The Need for a Shift in Focus

To effectively regulate AI, it is essential to shift the conversation from technology-focused discussions to examining the societal effects of these tools. A framework that prioritizes Fairness, Accountability, Transparency, and Ethics (FATE) serves as a foundational philosophy toward responsible AI deployment. By evaluating AI systems through the lens of FATE, regulators can better understand the implications of data use while ensuring that the interests of those affected are safeguarded.

Data as a Resource with Ethical Implications

In the realm of AI, data serves as the raw material upon which these technologies depend. As such, it is crucial to analyze how the data used in AI systems is sourced, handled, and maintained. Are ethical considerations embedded in the process? Are there measures in place to ensure fairness and accountability in the data that informs algorithmic decisions?
By applying FATE principles to the data utilized for AI, we can cultivate a more nuanced understanding of its impact. This approach enables regulators to not only hold developers accountable but also ensure that ethical considerations are woven into the fabric of AI technology from the outset.

Conclusion

The regulation of AI technologies requires a fundamental rethinking of priorities. Regulators must move beyond a narrow focus on technology as the core aspect of regulation and instead center discussions on the effects of AI on society. By embracing an ethical framework and scrutinizing data practices, we can create a regulatory environment that promotes responsible AI use, one that protects individuals while fostering technological innovation. The task is not just about managing technology; it is about ensuring that advancements in AI contribute positively to society as a whole.


References

  1. NRC, 19 July 2024, https://www.nrc.nl/nieuws/2024/07/18/regelgevers-en-toezichthouders-proberen-achter-de-innoverende-ai-bedrijven-aan-te-racen-a4860120

Associative memory as a driver for innovation

Understanding Large Language Models: Thoughts on Their Creation, Functionality, and the Rise of Smaller Models
Large language models (LLMs) have primarily been created using brute force approaches, leveraging vast amounts of data and computational power. Their emergent capabilities have surprised many within the field, prompting organizations such as OpenAI to release their models publicly at an early stage, responding to the unexpected effectiveness of these systems.
Despite their success, our understanding of LLMs' internal workings remains incomplete. I posit that these models operate analogously to the associative memory processes inherent in the human brain. Just as our brains form connections based on experiences and knowledge—where one idea or concept can evoke another—neural networks, through their architecture and training on vast datasets, mimic this associative thinking. The underlying mechanisms of LLMs enable them to draw inferences and generate text based on patterns recognized during training, akin to how humans recall related memories or concepts when prompted with a stimulus.
This analogy not only illustrates the potential for processing language but also highlights gaps in our comprehension of how these models function. Accepting that LLMs mirror some aspects of human associative memory opens numerous avenues for exploration. It suggests that there is significant potential for improvement—this includes optimizing larger models and, importantly, developing smaller, more efficient architectures that can achieve similar degrees of performance.
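
To give the analogy a concrete, deliberately simplified form, the sketch below implements a softmax-weighted associative readout over stored vectors, the same mathematical pattern used by attention layers. It is a toy illustration of associative recall, not a claim about the internals of any particular LLM.

    # Toy associative memory: a cue retrieves a similarity-weighted
    # blend of stored patterns, echoing softmax attention.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8                                # dimensionality of each pattern
    memories = rng.normal(size=(5, d))   # five stored patterns

    def recall(cue, beta=4.0):
        scores = memories @ cue                      # dot-product similarity
        weights = np.exp(beta * (scores - scores.max()))  # stable softmax
        weights /= weights.sum()
        return weights @ memories                    # similarity-weighted recall

    # A noisy cue should retrieve something close to the original pattern.
    noisy_cue = memories[2] + 0.1 * rng.normal(size=d)
    recalled = recall(noisy_cue)
    cos = recalled @ memories[2] / (
        np.linalg.norm(recalled) * np.linalg.norm(memories[2]))
    print(f"cosine similarity to stored pattern: {cos:.3f}")  # near 1.0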

In recent trends supporting this shift, OpenAI has released GPT-4o mini, a smaller variant of GPT-4o. This model has made headlines as the best small model available, even surpassing competitive models like Anthropic's Claude 3 Haiku and Google's Gemini Flash on some benchmarks. Remarkably, this efficiency comes at a fraction of the cost, priced at just 15 cents per million input tokens and 60 cents per million output tokens, demonstrating a powerful trend towards creating AI models that are not only smaller but also faster and cheaper.
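
At those prices, the economics are easy to verify with back-of-the-envelope arithmetic; the workload figures in the sketch below are invented purely for illustration.

    # Cost check at the GPT-4o mini prices quoted above:
    # $0.15 per million input tokens, $0.60 per million output tokens.
    INPUT_PRICE = 0.15 / 1_000_000    # dollars per input token
    OUTPUT_PRICE = 0.60 / 1_000_000   # dollars per output token

    def workload_cost(requests, in_tokens, out_tokens):
        return requests * (in_tokens * INPUT_PRICE
                           + out_tokens * OUTPUT_PRICE)

    # Hypothetical workload: 100,000 calls, each with a 2,000-token
    # prompt and a 500-token reply.
    print(f"${workload_cost(100_000, 2_000, 500):,.2f}")  # $60.00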

The movement towards smaller models reflects the reality that early AI systems were underoptimized, which led to excessively high operational costs. As noted, GPT-4o mini is more than 60% cheaper than its predecessors while delivering significantly improved performance. This rapid evolution suggests that models once considered exorbitantly priced could have been optimized much earlier.

The associative memory aspect of these models may play a crucial role in their efficiency. By drawing on a vast web of relationships between data points, LLMs can respond more swiftly and accurately, reflecting the way human memory retrieves relevant information based on associative links. This capability allows smaller models to take advantage of increased efficiency, focusing on the most relevant connections rather than necessitating the sheer computational weight of their larger counterparts.

As we consider these advancements, we must acknowledge a changing landscape wherein a few key players, Meta (Facebook), Apple, Amazon, Nvidia, and Google, together with Microsoft, control not only the most powerful models but also the essential distribution channels through which these technologies reach consumers. This reality highlights a critical aspect of today's AI market: it is not a democratization of technology that is emerging, but rather a consolidation of power among a select few companies.

OpenAI, the primary developer of models like GPT-4o, operates within this ecosystem, standing alongside competitors such as Google and Anthropic, who similarly control both large and smaller AI models. Their dominance raises questions about the future of competition in the AI field. While many hoped for a shift that would allow smaller companies to unseat these giants, the reality is that these established entities have proven resilient in maintaining their positions.

The mechanisms through which AI is now integrated into everyday technology further cement this trend. Major companies like Microsoft and Google are capitalizing on their existing distribution channels, seamlessly incorporating AI models into widely used services like Office and Workspace. As such, consumers are unlikely to turn to alternative, smaller providers for AI solutions; instead, they will engage with the technology offered by the leading players, reinforcing the oligopolistic structure of the market.

I conclude that LLMs signify a remarkable convergence of technological advancement and market dynamics. While there remains a path for innovation and optimization— which includes the development of powerful yet cost-efficient smaller models like GPT-4o mini—the surrounding industry landscape is increasingly dominated by large corporations. The real challenge moving forward will be navigating these complexities, ensuring that advancements in AI technology reflect the associative processes of human cognition, and ultimately providing benefits to a broader array of stakeholders, rather than perpetuating existing hierarchies.

Embracing Vulnerability and Softness

Embracing Vulnerability and Softness in the Era of Gen/AI Developments with a Socio-Ethical and Socio-Technical Lens
My unwavering commitment to embracing vulnerability and softness in modern organizational life extends beyond personal growth and innovation to encompass a broader socio-ethical and socio-technical perspective on Gen/AI developments. In navigating the complex landscape of advancing technologies and evolving social structures, I hold firm to the belief that integrating principles of authenticity, empathy, and inclusivity is fundamental in shaping the responsible and human-centric use of AI technologies.
In the realm of Gen/AI developments, where algorithms and intelligent systems blur the lines between human cognition and machine intelligence, a socio-ethical viewpoint underscores the importance of considering the societal implications and ethical considerations of AI applications. By infusing AI systems with the values of vulnerability, transparency, and ethical decision-making, we can create technologies that not only enhance efficiency but also prioritize human well-being, fairness, and social justice.
From a socio-technical perspective, the integration of vulnerability and softness in AI design and implementation acknowledges the intricate interplay between technology and society. Recognizing that technology is embedded within social contexts, I advocate for a holistic approach that considers the diverse needs, values, and perspectives of individuals and communities impacted by AI innovations. By fostering collaborations between technologists, ethicists, policymakers, and community stakeholders, we can co-create AI solutions that address complex sociotechnical challenges and foster trust and transparency in AI deployment.
The philosophy of "vanuit kracht kies ik voor kwetsbaarheid en zachtheid" (from strength, I choose vulnerability and softness) guides my actions in engaging with Gen/AI developments through a socio-ethical and socio-technical lens. By engaging in critical dialogues, advocating for responsible AI practices, and promoting diversity and equity in AI development, I strive to contribute to a more inclusive and human-centered technological ecosystem that empowers individuals and communities while upholding ethical standards and societal values.
In embracing vulnerability and softness within the realm of Gen/AI developments, I invite you to join me in exploring the intersection of technology, ethics, and society, and charting a path towards a future where technology serves as a force for positive social change and collective well-being.
Visit www.grio.nl to delve deeper into my perspectives on vulnerability, softness, and the socio-ethical dimensions of AI technologies in modern organizational life.