Friday, April 10, 2026

The Future of Work in the Age of Automation: Proceedings of a Workshop on Norbert Wiener’s 21st Century Legacy

"If you're not thinking about AI, you're not thinking." ~ Chris Meyer

Norbert Wiener (1894–1964) is famously cited as the mathematician who founded the field of cybernetics with the publication in 1948 of his seminal book Cybernetics: Or Control and Communication in the Animal and the Machine. His work was a direct intellectual precursor to modern AI and automation. 


Wiener was hardly optimistic about the effects of AI on the future of work. He repeatedly warned that unchecked automation would cause massive unemployment, treat machines as the “precise economic equivalent of slave labor,” and force human workers to accept slave-like economic conditions.


In 2023, an IEEE Workshop on Norbert Wiener’s legacy examined how AI and automation are reshaping the future of work. Drawing on Wiener’s warnings about job displacement and the ethical duties of technologists, participants critiqued overly optimistic “technological determinism” narratives that ignore social costs. 


Discussions highlighted qualitative losses — reduced meaning, creativity, and human connection in work — alongside risks of growing inequality and environmental harm. The workshop called for interdisciplinary collaboration, stronger governance, and a shift toward human-centered values like dignity and flourishing, rather than pure efficiency or profit, to ensure technology serves society rather than disrupts it.


One of the participants in this workshop was Pedro H. Albuquerque (Senior Member, IEEE). Born in Brazil, he obtained an Electrical Engineering degree from the University of Brasilia and a Ph.D. in economics from the University of Wisconsin-Madison. Our paths crossed while he was teaching economics at UMD here in Duluth approximately two decades ago, when he became part of the philosophy club we hosted in our home.


He is now a Research Fellow with the Aix-Marseille School of Economics in France and a cofounder of ACCELERATION & ADAPTATION. He has published articles in many scientific journals and has presented at leading international conferences, in fields as varied as occupational science, engineering, economics, finance, and social studies. His areas of interest are technology studies, economics, occupational science, sustainability, and finance. 


Over the years Albuquerque has been a gracious resource, and with the increasing adoption of AI I've been leaning in to hear his cautionary thoughts on the topic. What follows is a batch of short answers to long questions pertaining to the workshop on Norbert Wiener in which he participated.


EN: Norbert Wiener emphasized the moral duty to anticipate risks and societal impacts. How do you balance that responsibility with the risk of overcorrecting—where caution itself might slow or block beneficial innovation?


Pedro Albuquerque: Generally the opposite risk doesn't exist; when it's mentioned, it's normally a political narrative in favor of some status quo. The real challenge is not having enough restraint (for example, the development of nuclear weapons).


EN: Some argue that technology threatens “meaning” in work and life. But isn’t meaning ultimately an individual responsibility? To what extent should technology be held accountable for something so personal?


PA: The effects of technological innovations on our lives are hardly under our control (for example, parents are unable to avoid the consequences of Internet misuse on their children, no matter how hard they try).


EN: There’s concern about “loss of human engagement” in an AI-driven world. How much of that is driven by technology itself, and how much is the result of individual choice and personality differences?


PA: It arguably affects everyone, though some more than others.


EN: When we talk about “fairness” in the age of AI, what does that actually mean? If technological progress raises overall prosperity, is it inherently problematic if some benefit more than others?


PA: It doesn't necessarily raise overall prosperity; technology is neutral on that. Prosperity in its use is a political choice, not a technological matter.


EN: The discussion often emphasizes “justice” and “equity” in AI outcomes. How do you define those terms in practical ways, especially compared to clear historical injustices like redlining?


PA: Redlining is a good example; technology may be politically chosen to be the instrument of oppression.


EN: We currently see large numbers of unfilled jobs in certain sectors. Could AI and automation actually solve labor shortages rather than displace workers—and how should we think about that distinction?


PA: Again, the outcomes are driven not by technology but by political choices.


EN: Many discussions focus on what could go wrong with AI. How do you weigh those risks against the possibility that the opposite—positive, transformative outcomes—may be just as likely?


PA: As humans are risk averse, risks and damages are normally the central concern.


EN: There’s a concern that efficiency may come at the expense of artistry or human connection. But in areas like housing or healthcare, speed and scale can meet urgent needs. How should we balance efficiency with human-centered values?


PA: Efficiency can be evaluated in terms of both quality and quantity. Modern societies have been lobotomized by a "quantity over quality" productivist ideology, in which whatever can't be measured or financially evaluated is swept under the rug.


EN: AI’s energy use is often criticized. How should we evaluate that concern in the broader context of energy innovation—such as nuclear or other emerging solutions?


PA: It's a whole other Pandora's box. Let's just say for now that this new technology will put increasing and extreme pressure on a system that is already unsustainable and in great danger of collapse.


EN: When people talk about “ensuring worker wellbeing” in an AI-driven economy, what does that mean in concrete terms? What should actually be measured or protected?


PA: Another Pandora's box. We'd need to set aside economics, which isn't very helpful for these matters, and let the public health and occupational scientists speak. But in the current political and economic systems their voices remain mostly unheard.


EN: AI outcomes are often described as unpredictable. But uncertainty has accompanied every major technological shift. At what point does uncertainty become a reason for caution versus a normal condition of progress?


PA: History tells us we've always done less than optimally at prevention.


EN: Efforts to design “ethical AI” often focus on minimizing inequality. How do you balance that goal with the reality that individuals make different choices and define success in different ways?


PA: Public policies are known to successfully address these matters.

* * * * *

Download The Future of Work in the Age of Automation: Proceedings of a Workshop on Norbert Wiener’s 21st Century Legacy   https://drive.google.com/drive/search?q=Love%20et%20al 

What Are Your Thoughts on These Things?
Please leave a message in the comments.

Related Links
A Visit with Futurist Calum Chace on his new book The Economic Singularity
https://pioneerproductions.blogspot.com/2016/06/a-visit-with-futurist-calum-chace-on.html

Surviving AI by Calum Chace Is a Must Read for Those Who Plan to Be Here in the Future

https://pioneerproductions.blogspot.com/2016/06/surviving-ai-by-calum-chace-is-must.html

Will Computers Put Journalists Out Of Business? Check Out These 7 Stories

https://pioneerproductions.blogspot.com/2016/04/will-computers-put-journalists-out-of.html

