About Cybernetics

In the developing era of AI, I think we need a broader view. We need to become systems thinkers to keep up with the newest developments, and we need to adapt our educational system and rethink the competences it should teach.

Cybernetics is an interdisciplinary field of study that focuses on the theory, design, and application of communication, control, and feedback mechanisms in both natural and artificial systems. It was first introduced by Norbert Wiener in his 1948 book "Cybernetics: or Control and Communication in the Animal and the Machine."

The core concepts of cybernetics include feedback loops, information theory, communication, control systems, and self-organization. These ideas are applied across various fields such as engineering, biology, psychology, sociology, economics, and computer science. Cybernetics aims to understand how complex systems maintain stability or adapt over time by analyzing the flow of information within these systems.

Some key areas in cybernetics include:

1. Control Systems: The study of feedback mechanisms that help regulate a system's behavior towards achieving a desired outcome, such as thermostats and autopilots.
2. Communication Theory: Understanding how information is encoded, transmitted, and decoded within systems, which forms the basis for modern communication technologies like telephones, radios, and computers.
3. Artificial Intelligence (AI) and Robotics: Applying cybernetic principles to create intelligent machines capable of learning, adapting, and making decisions based on their environment or input data.
4. Biological Systems Analysis: Studying how living organisms maintain stability and adapt through feedback mechanisms in physiological systems like the human body's homeostasis regulation.
5. Social Cybernetics: Examining social systems, organizations, and societies using cybernetic principles to understand their structure, communication patterns, decision-making processes, and self-regulation capabilities.
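The feedback-loop idea at the heart of control systems (area 1 above) can be sketched in a few lines of code. The following is a minimal illustration, not a real thermostat implementation: all names, the gain value, and the setpoint are made up for the example.

```python
# Minimal sketch of a cybernetic feedback loop: a proportional
# "thermostat" that steers a temperature toward a setpoint.

def thermostat_step(temperature: float, setpoint: float, gain: float = 0.3) -> float:
    """One control cycle: measure the error, apply a corrective action."""
    error = setpoint - temperature        # feedback: compare output to goal
    correction = gain * error             # control: act proportionally to the error
    return temperature + correction       # the system's new state

temperature = 15.0
for _ in range(30):                       # repeating the cycle closes the loop
    temperature = thermostat_step(temperature, setpoint=21.0)

print(round(temperature, 2))              # settles near the 21.0 setpoint
```

The essential cybernetic point is in the loop, not the arithmetic: the system's output is measured, compared against a goal, and fed back as a correction, which is exactly how both a household thermostat and biological homeostasis maintain stability.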

Cybernetics has had a significant impact on the development of modern technology and our understanding of complex systems. It has contributed to advancements in fields such as artificial intelligence, robotics, computer science, engineering, and even influenced social theories like systems thinking and complexity theory.


A Tale of Two Organizations

The Paradox of Complexity: A Tale of Machine Learning in Tech Giants

In the bustling metropolis of Silicon Valley, nestled among the giants of technology, there existed a world of relentless innovation and fierce competition. In this world, the biggest players – Facebook, Google, Amazon, and others – were engaged in a constant battle to outdo each other in the realm of artificial intelligence and machine learning.

At the heart of this quest was a fundamental dilemma: the allure of building custom, intricate machine learning models versus the pragmatic need for standardization and simplicity. This story unfolds in two contrasting organizations, each representing a different approach to this quandary.

The Labyrinth of Complexity

In one corner stood Organization A, a composite of tech giants renowned for their custom-built AI solutions. They prided themselves on their ability to craft sophisticated, bespoke models tailored to each nuanced problem. Their corridors buzzed with the talk of the latest algorithms and cutting-edge techniques.

However, this pursuit of complexity came at a cost. The more intricate the models became, the heavier the burden they carried in terms of technical debt. Each custom solution was a masterpiece, but together, they formed an intricate labyrinth that few could navigate. The lack of standardization led to a chaotic environment where traceability, reproducibility, and transparency were often sacrificed.

The Perils of Innovation

Organization A's obsession with custom models began to show cracks. Projects that initially promised revolutionary outcomes stumbled under their own complexity. The pursuit of the perfect algorithm often led to overlooking the broader picture, resulting in solutions that were brilliant in theory but faltered in practice.

In the highly regulated realms of finance, healthcare, and public services, the absence of clear, reproducible methods began to raise concerns. The inability to trace decisions made by these AI systems became a significant liability, leading to mistrust and skepticism among stakeholders.

The Alternative Path

Meanwhile, Organization B, representing the other side of the tech giants, approached the problem differently. They understood the allure of custom models but recognized the pitfalls of excessive complexity. Their philosophy was grounded in finding a balance between innovation and pragmatism.

Organization B embarked on a journey to experiment with a variety of tools and methods. Their goal was not to build the most intricate models but to find algorithms and workflows that could be standardized, ensuring minimal technical debt. This approach fostered an environment where innovation was encouraged, but not at the expense of clarity and manageability.

The Rise of Standardization

As time passed, the merits of Organization B's approach became evident. Their AI solutions, while not always as bespoke as those of Organization A, were robust, traceable, and reproducible. They could easily adapt to regulatory changes and were more transparent in their decision-making processes.

The tech community began to take note. The narrative shifted from glorifying complexity to valuing efficiency and reliability. Organization B's AI systems were not just tools for the present; they were sustainable solutions for the future.

The Lesson Learned

The tale of these two organizations served as a parable in the world of AI and machine learning. It highlighted the crucial balance between innovation and practicality. While the allure of building custom, complex models was undeniable, the long-term success in the tech world required a thoughtful approach to standardization and simplicity.

In the end, the giants of Silicon Valley learned that in the intricate dance of technology, sometimes the most powerful step is the one taken with caution and foresight.

GPTs and Conformity

How to Foster Critical Thinking in a Group in the Era of GPTs: Lessons from Asch's Conformity Experiments

Have you ever wondered how much your opinions are influenced by the majority in a group? Do you think you would stick to your own vision even if everyone else disagreed with you? Or would you conform to the group pressure and give up on your critical thinking?

These are some of the questions that Solomon Asch, an American psychologist, tried to answer in his famous conformity experiments in the 1950s. He wanted to see how people would react when faced with a simple visual task that had an obvious correct answer, but also a group of confederates who gave a wrong answer unanimously.

The results were surprising and disturbing. Asch found that about one-third of the participants conformed to the group at least once, even though they knew the correct answer. Some of them did it to avoid being ridiculed or rejected by the group, while others doubted their own perception and judgment. Only a few remained independent and confident in their responses.

What does this mean for us today? How can we foster critical thinking in a group setting, especially when we have to deal with complex and uncertain situations? Here are some suggestions based on Asch's experiments and other research:

- Encourage diversity of opinions and perspectives. Having a variety of viewpoints can help us challenge our assumptions, consider different alternatives, and avoid groupthink. Diversity can also reduce the pressure to conform, as people are more likely to express their dissenting opinions when they see others doing the same.
- Create a safe and supportive environment. People are more likely to share their honest thoughts and feelings when they feel respected, valued, and accepted by the group. A safe environment also allows people to admit their mistakes, ask for help, and learn from feedback. To create such an environment, we need to foster trust, empathy, and openness among group members.
- Promote constructive dialogue and debate. Rather than seeking consensus or agreement, we should aim for understanding and learning from each other. Dialogue and debate can help us clarify our assumptions, test our arguments, and refine our ideas. To do this effectively, we need to listen actively, ask questions, challenge respectfully, and acknowledge different perspectives.

Imagine this scenario: you have a meeting with ten participants. Nine of them did not have time to prepare for the meeting and simply ask ChatGPT for input. The tenth participant developed their own vision through critical thinking. What would happen if you followed these suggestions?

- You would be more likely to hear the tenth participant's opinion, as they would feel comfortable sharing it with a diverse and supportive group.
- You would be more likely to consider the tenth participant's opinion, as they would present it with evidence and logic, and invite feedback and questions from the group.
- You would be more likely to learn from the tenth participant's opinion, as they would engage in constructive dialogue and debate with the group, and acknowledge the strengths and weaknesses of their position.

As you can see, following these suggestions can help you foster critical thinking in a group setting, and avoid the pitfalls of conformity. Critical thinking is not only beneficial for individuals, but also for groups and organizations. It can help us solve problems creatively, make better decisions, and achieve our goals.

So next time you find yourself in a group situation where you have to express your opinion or make a choice, remember Asch's experiments and ask yourself: Am I conforming or thinking critically?

'AI and Ethics'

Chairman of the day Ronald Jeurissen opened the afternoon with the question: “How can Artificial Intelligence respect and promote people's freedom? The technology is still new and nobody knows what the future will look like." Still, he is hopeful: "Now is the chance to shape AI and ethics yourself."

Simone van der Burg, senior researcher at Wageningen University & Research, presents the dilemmas of smart farming. "Smart farming consists of technical means that help farmers better understand their business. Think of sensors that measure the soil composition or the production of a dairy cow. But who owns that data? Is it the farmer's trade secret? Or does it belong to the ICT companies? Or should society be able to monitor farmers so that they produce food that is plentiful, responsible, and safe?" Van der Burg sees those involved struggling with this question and continues her research into the ethical aspects of smart farming.

AI and Ethics Seminar - Technical and Ethical Considerations of Artificial Intelligence
Big Data or Big Brother?

"Facebook defines who we are, Amazon knows what we want and Google knows what we think," says Marcel Becking, philosopher at Radboud University Nijmegen. "Big data knows better than we do what we want. But if the technology is used like Big Brother, then our autonomy is at stake. Power is an important element. Silicon Valley companies know a lot, but they have no data about your health, education, or banking. What happens when you start texting or emailing your doctor? That is why it is important that politics also plays a role. The GDPR is the first step."
Building Dreams

Annelies van den Brink and Jan Marsman of Hitachi ask the question: What are people doing with AI worldwide and what can the Netherlands add to it? According to Forbes, Russia is investing in war technology and the US is investing in talent. Estonia is a forerunner in legal issues about AI. They also have a relatively large number of start-ups. And the Netherlands? Van den Brink explains: “Invest in collaboration and technology. In the Golden Age we built the best ships, we can do that again now. Build a dream.”
AI and Robots: Curse or Blessing?

Guszti Eiben of VU Amsterdam says: "Machine learning is hot. AI must satisfy four things: thinking like a human, acting like a human, thinking rationally, and acting rationally. Intelligence needs a body, mind, hardware and software. I expect the development to go fast." If robots can think for themselves and develop new robots themselves, the following question will arise: "Who should be protected in the future? Do robots have rights?" One thing is clear: living and working with AI will never be the same again.

Seminar AI and Ethics
Cybercrime and AI

Jan Veldsink is the last speaker of the seminar. He works at Rabobank and is a teacher at Nyenrode. "Banks face major challenges in protecting their customers. Fighting fraud has to happen quickly. It is illogical for someone to withdraw money in both the Netherlands and Indonesia within the same hour. In addition, machine learning models must be 100% correct. And you have to be prepared for new attacks all the time." Technology will certainly change the future. Veldsink: "We want it to be safe and add value."
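The "withdrawals in two countries within an hour" signal described above is often called an impossible-travel rule. A hypothetical sketch of such a check follows; the coordinates, the speed threshold, and all names are illustrative assumptions, not Rabobank's actual method.

```python
# Sketch of an "impossible travel" fraud check: flag a card when two
# withdrawals imply a travel speed no airliner could achieve.

from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Withdrawal:
    time: datetime
    lat: float
    lon: float

def distance_km(a: Withdrawal, b: Withdrawal) -> float:
    """Great-circle distance between two withdrawal locations (haversine)."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))   # Earth radius ~6371 km

def impossible_travel(a: Withdrawal, b: Withdrawal, max_speed_kmh: float = 900.0) -> bool:
    """Flag if the implied speed between withdrawals exceeds the threshold."""
    hours = abs((b.time - a.time).total_seconds()) / 3600
    return distance_km(a, b) / max(hours, 1e-9) > max_speed_kmh

# Amsterdam at 12:00, Jakarta at 12:40 the same day: clearly impossible.
nl = Withdrawal(datetime(2019, 5, 1, 12, 0), 52.37, 4.90)
idn = Withdrawal(datetime(2019, 5, 1, 12, 40), -6.21, 106.85)
print(impossible_travel(nl, idn))  # True
```

Real fraud systems combine many such rules with learned models, but this single check captures the logic in Veldsink's example: geography and time together make the transaction pair implausible.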

Participant Marko Kiers will start as a manager at Oracle next month. He says: "I found Eiben inspiring. He uses fun TV series like Westworld in his presentation. I also found smart farming very interesting. What will happen if farmers can live off selling their data? That provides a greater return for the early adopters." Joke Ederveen follows the module Market, Law & Ethics at Nyenrode. "I thought it was very topical. It has become clear to me that knowledge should be available to everyone. In my role as a business consultant, I want to put AI higher on the agenda."
Modular Executive MBA in Business & IT

The topics in this seminar are covered during different modules of the Modular Executive MBA in Business & IT. Learn how to bridge the gap between IT and Business as a manager or director.