Picture of Herman Bruyninckx

Prof. dr. ir. Herman Bruyninckx

Full Professor (Gewoon Hoogleraar) at KU Leuven.
Part-time Full Professor at Eindhoven University of Technology.
Member of the Flanders Make@KULeuven Core Lab “MPRO” (Motion Products).

My strongly preferred communication channel is email: herman.bruyninckx (@) kuleuven.be

Address:
Departement Werktuigkunde (KU Leuven)
Celestijnenlaan 300, bus 2420
B-3001 Heverlee (Leuven)
Belgium.

Room: 02.019 — Telephone (secretary): (+32) 16 32 24 80

ORCID identity: 0000-0003-3776-1025

“Simple, though not easy…”
Location: Flanders, Blue Banana.

Short biography

I started my PhD in robotics in 1989, after obtaining master's degrees in mathematical physics, computer science, and mechatronics. The context of my research was provided by the realtime sensor-based robot control research activities of Hendrik “Rik” van Brussel (who pioneered robotics in Leuven in 1975, and holonomic manufacturing systems from the 1990s) and Joris De Schutter (who identified the essential relations between “compliant motion” task specification, hybrid force/velocity control and the mechanical compliance properties of robot, tool and environment). Joris especially was instrumental in my research towards introducing fundamental structures into sensor-based robot control: invariance of solutions with respect to changes in reference frames and physical units; redundancy and constraint resolution; task specifications distributed over several locations on robot, tools and environment, formally represented as constrained optimization problems; integration of uncertainty in task specification and control, in the form of both Bayesian probability theory and multi-hypothesis world models.

As an early full-time adopter of Linux from 1994 on, and after postdoc stays at the University of Pennsylvania (1996) and Stanford University (1999), I decided to make software engineering for robotics one of my research focuses, and vendor-neutral open standards one of my technology advocacy focuses. The natural outcome of both is a body of seminal open source software for realtime, distributed and heterogeneous system-of-systems robotics applications.

In 2010, I started research on formal knowledge representation in robotics, in the form of hierarchically and heterarchically interdependent property graph models. The aim of this effort is to design robots that can exploit these knowledge graphs themselves (and not only their human software developers) to discover each other, to find out with whom to cooperate on tasks that can (or have to) be shared, and to explain each of their decisions, to themselves, to their peer robots, and to human stakeholders (developers, users, certifiers and regulators).

The major lesson I learned after 30+ years of making robotics applications is that overall progress in robotics has slowed down tremendously, with, even in times of the “deep learning” hype, an amazing amount of “forgotten insights” in the community mainstream. The major cause of this evolution is that we fail to give our robots enough high-level context through which to integrate the essential five aspects of any robotic application: world models, formal task specifications, perception, control and situation-aware skills. These aspects must be tackled, at the same time and in an integrated way, at a dozen or so “levels of abstraction”, coupled by “5Cs” software architectures (that is, the Composition of Computation, Communication, Coordination and Configuration).

Explaining that integration goes beyond the limits of conference or journal publications, which is why I started a long-term dissemination project, in the form of an online “work in progress” book about the above-mentioned “5Cs”: Building blocks for complicated and situational aware robotic and cyber-physical systems.

Mission: embed “deep knowledge” in robotic systems-of-systems

My research mission focuses on integrating formally represented domain knowledge into a robot's control software, with a special emphasis on efficient realtime and on-line behaviour. In other words, my quest is for the holy grail of the (mythical) “robotics ontology” that robots can use themselves, when interacting with each other.

This puts my research in the (currently still sparsely populated) corner of Artificial Intelligence that is opposite (or, rather, complementary) to the (currently over-popular and over-populated) (deep) learning. (In the 1990s, the terms symbolic and sub-symbolic AI were in fashion; nowadays reinvented as neuro-symbolic AI.) My knowledge representations do include the relations with which the latter techniques can be integrated into any robotic system, in a systematic way. (Like any other AI technique for that matter, if one is capable of removing one's own hammer-and-nail tunnel vision.) That integration consists of how to configure the many “magic numbers” that the (so-called) “model-free” techniques require to work properly in a specific task and application context. This configuration in itself requires quite some understanding of the intricate dependencies between the perception, plan execution, control, decision making and monitoring activities in a robotic system.

Each layer of abstraction in your system model is worth a dozen convolution layers in your software cloud

Local ecosystem

My research takes place in close cooperation with Erwin Aertbeliën (super expert in dissecting and programming the most difficult robot tasks and in numerical solvers), Wilm Decré (my main liaison with industrial projects), Joris De Schutter (former supervisor, co-creator of most of my “robot skills” R&D; he reads robotics challenges like no one else), Goele Pipeleers (for solvers of constrained optimization problems), Jan Swevers (for all things control), René van de Molengraft (core co-creator of my approach towards system-of-systems architectures, and especially towards “lazy” robot skills), Eric Demeester (for shared control and industrial robot vision), Peter Slaets (for unmanned (“autonomous”) shipping), Karel Kellens (for making robotics hardware), Manu vander Poorten (for making medical robots), Andrew vande Moere (main liaison for challenging robotics R&D towards making complex robot tools simple enough for architects to start using them like a pencil or a sketching board), and a manageably small set of PhD students and postdocs. With all my KU Leuven colleagues, I have enjoyed the view from standing on the shoulders of our robotics founding father Rik van Brussel.

Research expertise

You can call me when you are in need of higher-than-average expertise in:

  • kinematics and dynamics of serial, parallel and hybrid robotic kinematic chains. Maybe your questions are already answered in my extensive online course notes on this topic…
  • situation-aware robot skills, based on my Task-Situation-Resource meta model.
  • integration of symbolic, discrete (“supervisory”) and continuous control, based on the hybrid constrained optimization meta model. With “hybrid” reflecting the fact that the controller models combine constraints and objective functions at all these three levels.
  • formal knowledge representations, which are a necessary condition for the former item. I have a strong preference for using the web standards JSON-LD, RDF, SHACL and SPARQL.
  • knowledge-based sensor processing, based on the paradigms of (Bayesian and other) information theory and perception association hierarchies. My interests go from Kalman Filters for motion estimation to Conditional Random Fields for computer vision, and from numerical approaches (based on PDFs) to symbolic ones (based on multiple-hypotheses models with explicit “I don't know” semantics).
  • models and software for complicated and situational aware robotic and cyber-physical system architectures. Maybe your questions are already answered in my extensive and work-in-progress on-line book.
  • digital twins: I consider this terminology to be the “modern” version of what used to be called components (or “intelligent agents”) in holonic system architectures: software representations of parts of the real world, that reflect what is going on in the real world, or what is possible as actions on that real world. My focus is on the (hard) realtime software agents that can keep up with the natural dynamics of robotic structures.
  • (realtime) Linux, or, more generally, consultancy to industrial users about what can be done with Free and Open Source Software, and especially also about how to do it.

Crises are opportunities

One causality is worth a thousand correlations

I used to call myself a roboticist, but fund-seeking adaptability has helped me transition into a self-explaining AI researcher :-)

Robot vendors have helped a lot in this transition, by stubbornly continuing to build expensive robots that are too heavy, too stiff, and too precise. The result is that mainstream robot programming has been reduced to the level of programming computer games in purely geometric worlds, with robots not even trying to interact with the physical world, even when that would bring high added value to their customers. So, there is a large window of opportunity for fundamental research into “intelligent” (read: knowledge-driven) and trustworthy physical interactions between “lousy robots” and the world around them.

Research

Publications

Here is the list provided by the KU Leuven library system. Contact me to get electronic copies of my publications if you can't find them online.

The publication that has received most of my attention for the past couple of years (and will continue to do so for many more years to come) is my online “work-in-progress” book on Building blocks for the Design of Complicated Systems featuring Situational Awareness.

The origins of this book go back to the early 2000s, when I became active in promoting the introduction of the separation of concerns concept into the robotics domain. Klas Nilsson originally inspired me to study the 4Cs (of Radestock and Eisenbach, 1996), which I refined into the 5Cs: Computation, Communication, Coordination, Composition, and Configuration.

A key figure in my current research is shown to the right. I call it the Task-Situation-Resource (TSR) meta model, and it gives the interaction structure of the Computational components needed to build Skills. A Skill is the component that contains all the knowledge to let a robotic system execute its Tasks, with the given Resources, and under the hypothesis that it has correctly perceived the actual Situation. Another Research Question deals with how to create the behaviour inside the perception and control components.

You can find my course notes on robot kinematics and dynamics online. I tried very hard to integrate all historical references in the domain. Please, let me know if you find seminal work that I missed.

Task-Situation-Resource meta model for skills

Research question

How can robot programming, control, perception, and learning be made more knowledge-driven (“affordance-based”), exploiting all prior knowledge (about the tasks, the robots, the objects they interact with, and the environment they have to survive in) for (i) realtime execution, and (ii) self-explanation of all runtime decisions?

Preliminary answer: by exploiting (Bayesian and other) information theory, and systems & control theory, to model all sensory-motor interactions, and to embed their runtime exploitation by means of model-predictive controllers and moving-horizon estimators.
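As a minimal, concrete illustration of the Bayesian-estimation side of this answer, here is a scalar Kalman filter sketch; all noise values and measurements below are illustrative assumptions, not values from my software:

```python
# Minimal 1D Kalman filter: the simplest Bayesian state estimator.
# All numbers below are illustrative assumptions, not project values.

def kalman_step(x, P, z, q=0.01, r=0.25):
    """One predict/update cycle for a scalar random-walk state.

    x, P : prior state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances (assumed)
    """
    # Predict: a random-walk model leaves the mean unchanged,
    # but uncertainty grows by the process noise.
    P = P + q
    # Update: blend prediction and measurement, weighted by
    # their relative uncertainties (the Kalman gain K).
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

if __name__ == "__main__":
    x, P = 0.0, 1.0            # vague prior
    for z in [1.2, 0.9, 1.1, 1.0, 1.05]:
        x, P = kalman_step(x, P, z)
    print(round(x, 2))          # estimate has converged towards ~1.0
    print(P < 1.0)              # uncertainty has shrunk
```

A moving-horizon estimator generalises this single-step update by re-optimizing over a sliding window of past measurements, but the information flow (predict, then correct with uncertainty weights) is the same.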

Summary of results: the information and software architectures for the motion stack and the perception stack of robotic systems-of-systems are extremely similar, composable and formally verifiable. The “knowledge” is represented as constraints between parameters in the association model. This model is necessarily hierarchical, in the sense that it must be possible to let different sets of knowledge constraints apply to different parts of the model; this is a constructive approach to introduce the all-important concept of “context”.

Motion control association hierarchy. The context must be shared with the perception model.

Perception association hierarchy.

Research driver

Society's expectation to be provided with trustworthy robots only

Instead of chasing one of the many non-constructive “definitions” of levels of autonomy (like “Sheridan's 10”, Parasuraman, Sheridan & Wickens, IEEE Trans. Systems, Man, Cybernetics, 2000) we should design robots that can pass the “Trustworthy Turing Test” (TTT). A necessary condition is that a robot is self-explainable, that is, it is always able to answer the following questions:

Level	Description

One system — One task (“Self awareness”)
1	What am I doing?
2	Why am I doing it? (Or: why am I involved in the first place?)
3	How am I doing it,…
4	…and how well am I doing it,…
4b	…and how do I decide to stop doing it?

One system — Multiple tasks (“Situation awareness”)
5	What could I be doing instead,…
5b	…and still be useful,…
5c	…and how do I decide to switch what I am doing?
6	What is threatening my progress,…
6b	…and how can I make myself resilient,…
6c	…and how do I decide to add a particular resilience?

Multiple systems — Multiple tasks (“Empathy”)
7	What progress of others am I threatening,…
7b	…and how can I make myself behave better,…
7c	…and how do I decide to adopt a particular better behaviour?
8	What other machines and humans can I cooperate with,…
8a	…and how do I find out how we can coordinate our cooperation,…
8b	…and how do we decide, together, what coordination to adopt,…
8c	…and how do we monitor our coordination,…
8d	…and how do we decide that someone has cooperation problems?

The answers to these questions help human observers to assess whether the system is “aware” of the context, purpose and consequences of its actions, and the motivations behind its decisions.

The last time I looked, the state of the art in robotics was still at level 0…

These TTTs are tough to realise because they require thorough scientific methods, but it is my hypothesis that they are the only way to build “AI” technology that is refutable/falsifiable, respects causality, and is predictable to the extent of being certifiable. In other words, robotic system developers can claim to be working ethically if and only if they strive for full TTTs in all their systems.

Research question

What is the essential and minimal structure to model the functional aspects of robotic systems, such that:

  • all entities, relations, and constraints are given a unique and semantically unambiguous place.
  • no model must ever be changed when composed into a larger system, except for some configuration of parameters in the model.
  • all physical constraints can be covered: energy sources, power transformation to the mechanical domain, mechanical transmissions, joints, kinematic chains.
  • all artificial constraints can be covered: tasks for individual robots as well as (cooperative, cyber physical) systems of robots.
  • the knowledge representation (and hence the programming of robots) becomes a lot more semantically consistent.
  • and (hence!) deterministic,
  • and (hence?) self-explainable,
  • and (hence?) verifiable,
  • and (hence!) certifiable,
  • and (hence?) societally trustworthy.

Preliminary answer: by creating lots of small ontologies, with their Primitives, Relationships, Constraints and Tolerances encoded in languages such as JSON-LD, that support N-ary relationships and context-specific hierarchical composition as first-class citizens.
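As a hedged sketch of what such a small ontology fragment could look like, here is a JSON-LD-style document with a minimal context expansion in Python. All term names and the namespace URL are hypothetical illustrations, not an existing vocabulary:

```python
# Sketch of a tiny "ontology" fragment in JSON-LD style, with a minimal
# @context expansion. The rob: namespace and all terms are hypothetical.
import json

joint = {
    "@context": {
        "rob": "https://example.org/robotics#",   # hypothetical namespace
        "Joint": "rob:Joint",
        "connects": "rob:connects",
        "max-torque": "rob:max-torque",
    },
    "@id": "rob:elbow-1",
    "@type": "Joint",
    "connects": ["rob:upper-arm", "rob:fore-arm"],
    "max-torque": 15.0,     # an N-ary relation would add units, tolerances, ...
}

def expand(doc):
    """Replace compact terms by their full IRIs, using the @context."""
    ctx = doc.get("@context", {})
    def term(t):
        if t in ctx:                       # term defined in the context
            t = ctx[t]
        if ":" in t:                       # compact IRI (prefix:local)
            prefix, _, local = t.partition(":")
            if prefix in ctx:
                return ctx[prefix] + local
        return t
    out = {}
    for k, v in doc.items():
        if k == "@context":
            continue
        key = k if k.startswith("@") else term(k)
        # (lists are left compact in this minimal sketch)
        out[key] = term(v) if isinstance(v, str) else v
    return out

if __name__ == "__main__":
    print(json.dumps(expand(joint), indent=2))
```

The point of the sketch is the composition mechanism: the `@context` localizes the meaning of the short names, so the same fragment can be embedded in different, larger knowledge graphs without being changed.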

Summary of results: I was a key creator of the BRICS Component Model, and of its successor, the “5Cs” System Composition Pattern, which is a scientific paradigm to support the design, development, deployment and runtime adaptation of complex robotic (and other cyber-physical) systems.

Research question

Which mechatronics design paradigm can provide cheap, light and safe (hence, “lousy”) robot hardware? This is a necessary evolution before robotics platforms can become a cheap commodity.

Which task specification and execution paradigm can make such “lousy” robot hardware “good enough” for commodity tasks?

Preliminary answer: confidential, for now.

Research question

What is the essential and minimal structure to model the software aspects of robotic systems?
What architectural patterns can help us cope with the exploding complexity in knowledge, task variations, and distribution over several sub-systems?

Preliminary answers: (i) by systematically applying a small set of system-of-systems composition patterns, (ii) by clean separation of the information, software and hardware architectures, and (iii) by generating the robots' motions more and more by preview and precognitive control.

The good news is that the same best practices and patterns apply to systems with or without interactions with the physical world; of course, my interests are especially towards systems that have as many such physical interactions as possible. Assembly operations with multiple tools, agro-food manipulations, human-in-the-robot assistive control,… are primary examples.

Preview control is the “information architectural” model behind Model-Predictive Control (MPC) (and its estimation dual, Moving Horizon Estimation); it adds a symbolic/modelling part to the numerical robot state, to represent the task-level aspects of intentions, progress and benefits of an ongoing robot action, and to allow making decisions about altering that control on the basis of what the future is expected to bring. This information can be obtained as a side-effect of the control-level optimizations done in an MPC, by solving a Constrained Optimization Problem over finite horizons in time and state space. The MPC state in itself can already be “hybrid”, in that it contains discrete as well as continuous parts; the symbolic part is involved in a Constraint Satisfaction Problem, which is solved by reasoning systems and extends the MPC with a closed world of knowledge relationships (the symbolic equivalent of a “finite horizon”).
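The finite-horizon idea can be illustrated with a deliberately naive sketch: instead of a real constrained-optimization solver, a brute-force enumeration of short control sequences for a 1D double integrator. All numbers and the cost weights are illustrative assumptions:

```python
# Toy illustration of the "finite horizon" idea behind preview/MPC control:
# at every step, enumerate short acceleration sequences, simulate them over
# the horizon, and apply only the first input of the best sequence.
# A real MPC uses a constrained-optimization solver; this brute-force
# sketch only shows the information flow.
from itertools import product

def mpc_step(pos, vel, target, horizon=4, accels=(-1.0, 0.0, 1.0), dt=0.1):
    best_cost, best_u = float("inf"), 0.0
    for seq in product(accels, repeat=horizon):   # enumerate candidate plans
        p, v, cost = pos, vel, 0.0
        for a in seq:                             # simulate over the horizon
            v += a * dt
            p += v * dt
            cost += (p - target) ** 2 + 0.01 * a ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u             # receding horizon: apply only the first input

if __name__ == "__main__":
    # At rest, far below the target: the previewed costs favour accelerating.
    print(mpc_step(0.0, 0.0, target=1.0))   # → 1.0
    # At the target but moving fast past it: the preview favours braking.
    print(mpc_step(1.0, 1.0, target=1.0))   # → -1.0
```

Running `mpc_step` in a closed loop, feeding each chosen input back into the plant, gives the receding-horizon behaviour; the symbolic extension discussed above would sit on top of this loop, reconfiguring targets, horizons and constraints.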

When the robot itself is able to fill in the symbolic information in the preview control model, we call this control mode pre-cognitive control. We're not there yet…

Deprecated answer: I started the Open RObot COntrol Software (OROCOS) project in 2001, with later “spin-off” projects Kinematics and Dynamics Library (KDL) and Bayesian Filtering Library (BFL). My first PhD student on this material, Peter Soetens, started his spin-off company Intermodalics in 2010, on the basis of his unique expertise with the Orocos code.

Research questions “ICT in Society”

The following questions cannot really be called “research questions”, because the answers are obvious. But, the last time I looked, those answers were not accessible in state of the art publications, and not at all in educational curricula. So, I publish the questions here, and leave the answers as a warming-up exercise for the reader.

Why don't highly-educated decision makers (yes rectors, CEOs and ministers, it's you I'm referring to…) understand that an ICT platform that is only accessible with a login creates huge ICT monopolies like Microsoft (Teams, Office365), Apple (AppStore/iTunes, iOS), Google (Play, Android), Facebook, Twitter or LinkedIn? Aren't these the same people who do understand (I think…) why it does not make sense to require a single login to use the telephone system, the internet, or the World Wide Web? Or are they collectively victims of the shifting baseline syndrome, which causes them to measure “progress” against the wrong “best practices” and/or moments in the past?

Why do universities and government organisations put clickable logos of the above-mentioned companies on all their websites, and hence provide them with free advertisement space, introduce ICT discrimination, stimulate inequality, and reduce diversity?

Why don't highly-educated decision makers understand the fallacies of the free market, and hence fail to create ICT regulation and guidelines that lead to free market ICT platform with fair entrance and competition conditions? What is so difficult about the golden rule that a provider of a platform (ICT and others) should never be allowed to become a provider of services on top of that platform? And why should decision makers never allow platform providers to put a password on the platform to access the data that actually are owned by the user?

Why don't they understand that all the investments they make to close the digital divide only make it larger? For example, do they really believe that providing money to schools to use Microsoft Word/Excel, Apple FaceTime or the MathWorks Matlab helps pupils to become empowered IT users in later life? Maybe they don't realise that the moment these students leave school and start their careers, their employers immediately have to cough up several thousands of euros just to let them continue with the same ICT habits they were drilled in while at school?

Projects

Current international projects in the European Union's H2020 Programme: IMBALS (2018–2022).

Flanders Make is the main driver of technology transfer to the manufacturing and mechatronics industries in Flanders. More in particular, Flanders Make lowers the threshold for companies to introduce or further integrate robotics technology.

Lessons learned from past projects: BRICS (Best Practice in Robotics, 2009–2013) and Rosetta (Robot control for skilled execution of tasks in natural interaction with humans; based on autonomy, cumulative knowledge and learning, 2009–2013) have helped me understand what step changes are required in the domains of, respectively, cyber physical and robotics systems software engineering and task specification. In Pick-n-Pack (2012–2016), and Ropod (2017–2020), the insights gained in the above-mentioned projects were turned into innovative software solutions, in the contexts of “robotics” for, respectively, food production lines and hospital logistics. RoboHow (2012–2016) complemented the above-mentioned ones by (at that time, preliminary versions of) formal representations of the knowledge of robot motions and tasks. Sherpa (2013–2017) and the ongoing H2020 projects allow the positively brutal confrontation of our insights with the real world of various challenging application domains, and with the strong but highly justified requirements from end-users and industrial integrators.

Relevant project: IMBALS

(IMage BAsed Landing Solutions, H2020 project (Grant agreement ID: 785459) in the Clean Sky2 programme.)

In this project, we realise the semantic localisation ambition, in the context of the paradigm sketched in the Research question above, starting from the knowledge that is available about runways. This airport infrastructure is equipped with a large variety of visual markers that are (i) internationally standardized, and (ii) designed for optimal visual recognition. More in particular, we try to make software for each of the various associations, where the association knowledge results in the “appropriate” configuration of (i) the magic numbers in the many visual feature detectors that are needed in the application, (ii) the coordination of the order in which the various detectors are executed, and (iii) the decision making that maps the visual detector outcomes to a Quality of Service suggestion to the pilot about how well the plane is following the planned landing procedure.

A partial result is sketched below: for the landing phase where the plane is already on the runway, the major quality metric is the deviation from the center line of the runway.

Runway marking segmentation. Image courtesy of Airbus; vision processing annotations by KU Leuven.
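The center-line quality metric itself reduces to a small geometric computation: the signed lateral distance from the plane's reference point to the detected center line. A sketch, in which the coordinates and numbers are illustrative, not project data:

```python
# The "deviation from the center line" quality metric, reduced to its
# geometric core: signed perpendicular distance from a reference point
# to the detected center line. All coordinates below are illustrative.
import math

def lateral_deviation(point, line_p, line_q):
    """Signed perpendicular distance from `point` to the line through
    `line_p` and `line_q` (positive = left of the line direction)."""
    px, py = point
    ax, ay = line_p
    bx, by = line_q
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    # The 2D cross product of the line direction with the vector to the
    # point, normalised by the line length, gives the signed distance.
    return (dx * (py - ay) - dy * (px - ax)) / length

if __name__ == "__main__":
    # Center line along the y axis; reference point 0.8 m to its right.
    print(lateral_deviation((0.8, 5.0), (0.0, 0.0), (0.0, 10.0)))  # → -0.8
```

In the real pipeline this number would of course come from the segmented runway markings after camera calibration; the sketch only shows the metric that the QoS suggestion is built on.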

Our developments must be integrated with those of the other partners:

  • the QoS estimation must be visualised on the cockpit displays of ScioTeq (Kortrijk, Belgium).
  • we develop vision processing algorithms on “desktop” hardware, and (UN)MANNED transforms these to their certified Sol platform.
  • Airbus is the primary driver and stakeholder of the developments.

Education

Teaching

Approach

The contributions to the education of our young engineers that I value most are my emphasis on (i) system-level thinking, and (ii) an attitude of constructive, critical evaluation of all available sources of information, starting with pseudo-peer-reviewed open content such as Wikipedia. Our students (and staff…) typically score poorly on both aspects, which I think are fundamental for Europe's ability to maintain an innovative R&D ecosystem. The future does not belong to those who possess the most knowledge, but to those who understand how and where to apply that knowledge.

My most “revolutionary” contribution to education is to use professional mailing lists as first-class teaching tool: this is the most effective (albeit labour intensive) online approach to provide learning feedback to students on an individual basis, answering to their problems when they are ready for it. This best practice comes directly from my long-term, intensive immersion in, and contributions to, the Free and Open Source Software community.

Universities recently started to promote asocial media technologies as revolutionary additions to traditional educational practice. I'm sorry, but that technology existed (and has been used, by myself and many others) long before companies like Facebook, Twitter and Blackboard made it proprietary and put it behind passwords.

A recent presentation about the systematic use of Web standards in my blended learning approach can be found here.

Continuous education to individuals, organisations and companies

I consider an academic degree a more than decent starting point for a professional career, but nothing more. Hence, I offer state of the art update classes in all areas of my expertise, at consultancy-rate fees. Individuals, organisations (press, governmental administrations,…) and companies can apply for courses ranging from half a day to three days.

The fees go entirely to KU Leuven or TU Eindhoven funds that support my research. Both universities have selected continuous education as one of their major missions, but have not yet been able to provide effective solutions for the continuous education of alumni: very few of them ever come back to their alma mater (or another academic institute) to update their knowledge. So, mine is a humble contribution to that mission.

People

PhD students

PhD alumni

Bio

Part-time Full Professor at TU Eindhoven since February 2014.
Doctor honoris causa (“æresdoktor”) of the University of Southern Denmark (Syddansk Universitet, Odense, Denmark), October 3, 2014.
Full Professor at KU Leuven since October 2013.
Professor at KU Leuven since October 2008.
Associate Professor at KU Leuven since October 2002.
Assistant Professor at KU Leuven since October 1998.

From 2008 to 2015, I led the robotics community in Europe, first as Coordinator of the seminal network EURON, and in 2013–2015 as Vice-President Research of the euRobotics association. From its inception in 2000 until 2017, I was a member of the Jury for the association's Georges Giralt PhD Award. From 2008 to 2014, I acted as the Chairman of this Jury.

July–August 2002: sabbatical at the Centre for Autonomous Systems in the Royal Institute of Technology in Stockholm, Sweden, with Prof. Henrik Christensen.
April–August 1999: sabbatical with the Robotics group at Stanford University, with Prof. Oussama Khatib.
Mar. 1996–Aug. 1996: Postdoc at GRASP Lab, University of Pennsylvania, Philadelphia, U.S.A., with Vijay Kumar.
1995–2003: Postdoctoral Fellow of the Fund for Scientific Research (FWO) in Flanders.
1989–1995: Research Assistant at the KU Leuven, Department of Mechanical Engineering.
PhD (1995), Kinematic Models for Robot Compliant Motion with Identification of Uncertainties, under supervision of Joris De Schutter.

Military service (1988–1989).

Master-after-Master Mechatronics (1988).
Master (“Burgerlijk Ingenieur”) Computer Science (1987).
Master (“Licentiate”) Mathematics (1984).

Working with me

If you want to come and work with me, I expect you to be a full-time user of Linux, text-based editors (such as Vim), text-based email with inline posting, version control systems (e.g., git), LaTeX, and HTML5 and SVG (for documents as well as presentations). Contributions to Free and Open Source Software projects are very much encouraged, and contributions to “a friendly Wikipedia page in your neighbourhood” should become part of your daily hygiene.

The most important thing I can offer to potential post docs is a lot of opportunities to get immersed into the most vibrant core of the Dutch-Flemish robotics research scene, academic as well as industrial, including lots of interactions with dozens of robotics groups in Europe.

I am a firm believer in the maturity and self-driving pro-activeness of master and PhD students. Hence, I do not want to be their “supervisor” but rather their “somewhat more experienced coach”. And I expect them to always have a clear idea about where exactly they want to go with their research. My rule of thumb for a researcher is to have 2–3 research hypotheses written out in full, at all times: not only to explain to visitors what one's research is all about, but also to keep one's strength and self-confidence, since I flood them continuously with (only potentially) good ideas, papers and software, with constructive criticism, and with stimuli to “think weird” and “design big”. I do realise that such a turmoil of disruptive scientific discussions can cause doubts, takes some time to adapt to, and requires strong nerves to keep one's research focus. However, I do not apologize for this behaviour of mine, because practice has shown, over and over again, that people get stronger in the process. (Or they realise that research is not their cup of tea.)

Internship – Master thesis

In the context of the EU's Erasmus+ student exchange programme, I welcome internship and master thesis students from universities all over Europe, for a project in one of our many small robotics research teams of PhD students and postdocs.

Topics range from making mechatronic prototypes with, for example, VESC motor control units, to computer vision algorithms embedded in knowledge-driven robotic task executions.

Hence, I'm especially interested in students with a solid background in mathematics (numerical linear algebra, geometry), and software engineering (Linux, C, Lua, HTML5, and, to a lesser extent, “everything and the kitchen sink” wrong-level languages such as C++ or Java). Even more so when they have experience in the usage of, and the contribution to, Free and Open Source Software projects, to make the latter better suited for robotics.

(Lack of) ICT skills

Most educational systems worldwide are government-funded training centres for Microsoft Office, which is a major showstopper for the ICT empowerment of our society. This is about much more than just using a computer now for what you did 20 years ago already with pen, paper and file cabinets. It's all about standing on the shoulders of thousands and thousands of open source midgets and giants. A good start is to throw out that Outlook programme of yours, because it only supports top posting, sigh…

The best thing HR managers can do for the administrative collaborators in their organisation is to help them create and edit documents as HTML+CSS, and to replace Excel files with a real database, like PostgreSQL. It will not be easy, but they will quickly stop wanting to return to the old days, when information was shared by sending around Word documents or Excel files, and then trying to keep that information updated, consistent, and reusable in the ever-changing world around them.

Miscellaneous

I do not take part in asocial media, for pragmatic and ethical reasons: these initiatives introduce proprietary protocols and/or prevent bias-free, non-polarized, inter-community, multi-vendor interactions. The inevitable result is the creation of monopolies, which prevents fair markets for VoIP or social networking as emerging communication instruments. Only 40 years ago, our society succeeded in escaping from the traditional telecom monopolies, but it seems not to have learned anything from that experience…

If you want to invest in fairness, diversity and freedom, rely only on Open Standards formats, such as WebRTC, and on user-friendly implementations of them, such as JitsiMeet. Open standards in ICT improve vendor and software independence, long-term archiving, decoupling of ICT solutions, etc.

So, please, communicate with me only via plain text, HTML, PDF, or ODF.

If you are really concerned about a better world, get rid of your car and become a vegetarian/vegan: all other options are difficult to motivate wastes of energy, health and public space.

Spelling mistakes in “ICT” that cost society billions…

What we write…	What we get…
digital native	digital naive
social media	asocial media
decision makers	decision takers
ICT skills	Microsoft Teams
integration	sub-internets feudally gated by Microsoft, Google, Meta/Facebook, Apple, and Amazon
This material is Open Knowledge.
Keep knowledge public: support Wikipedia.
Use HTML everywhere.