Discussion about this post

Malcolm Murray
Richard Preece

I would suggest considering this through the lens of a Complex Adaptive System (CAS) for the mechanisms of coordination, a concept grounded in Systems Thinking and Cybernetics.

A CAS is a system that exhibits emergent and adaptive behaviour. CASs are composed of agents following simple local rules, and these rules lead to collective dynamics that yield emergent properties.
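The "simple local rules yielding emergent collective dynamics" idea can be sketched in a few lines of Python. This is an illustrative toy of my own devising, not anything from the post or the comment: agents on a ring each hold a binary state and repeatedly adopt the majority state of their immediate neighbourhood. No agent's rule mentions clusters, yet stable clusters emerge at the collective level. All names here (`step`, `simulate`, `k`) are invented for the sketch.

```python
import random

def step(states, k=1):
    """One synchronous update: each agent adopts the majority state
    among itself and its k nearest neighbours on a ring.
    This is the agent's entire 'simple local rule'."""
    n = len(states)
    new = []
    for i in range(n):
        neighbourhood = [states[(i + d) % n] for d in range(-k, k + 1)]
        # Majority vote over an odd-sized neighbourhood (no ties for k=1).
        new.append(1 if 2 * sum(neighbourhood) > len(neighbourhood) else 0)
    return new

def simulate(n=40, steps=30, seed=0):
    """Run the local rule from a random start; stop early at a fixed point,
    i.e. once stable emergent clusters have formed."""
    random.seed(seed)
    states = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        nxt = step(states)
        if nxt == states:
            break
        states = nxt
    return states
```

Nothing in `step` encodes "form contiguous blocks of agreement", but that is what the system as a whole tends to do: isolated dissenters get absorbed, and the surviving block structure is a property of the collective, not of any individual rule.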

This perhaps suggests a slight reframing of your hypothesis: the three forces of progress map to the core constituents of the universe (energy, information and matter), with coordination sitting above these as part of "intelligence," to draw from Feynman: "the ability to draw from experience, solve problems and to use our knowledge to adapt to new situations."

Defining the simple rules that lead to coordination, and to the "intelligence" of the collective dynamics and emergent properties that arise from this CAS, is perhaps the crux of progress, and a frame for analysing feedback from reality.

Thoughts only, but interesting analysis as ever.

Gianni Giacomelli

This is what collective intelligence is to individual intelligence. We had individually intelligent agents, and now we have potentially collectively intelligent systems. Assuming there is no abuse or cheating anywhere, this could be one of the most significant synthetic social experiments ever. It could be extremely insightful, and also potentially dangerous, especially if, for example, we allow the bots to have their own private conversations. But I love the point Azeem has made about incentives and how they skew emergent behaviors. The other high-leverage points in these emergent systems are network structures, information feeders, and collaboration infrastructure. Let's see what this does.

Forget about consciousness: the point is that all this chatter could amount to actual decision-making that AI could potentially carry out in the real world if given the means to do so. To these AI systems, there is no difference between feeling that they need to protect themselves at the expense of people and saying that they need to, then following up with words that amount to decision-making in that direction. At the same time, this is an important experiment to show where these systems could tilt by themselves, assuming that we can see what they do.

