Apr 10, 2017 8:48am

—“If we were able to produce neural networks that were built using the frame and parameters of propertarianism (natural law) to filter information, merged with humans, this, in all of its glorified science fiction, could produce meta-agency. Basically having a self-learning program implanted into humans that helps filter information in order to produce agency. Sounds pretty cool! This kind of agent-based programming is something another follower of yours showed interest in.”— A Friend

Nit – neural networks are exceptional at turning stimuli into symbols. I am not sure that they are a very good solution to any problem once we possess symbols. Nature builds on what she has, but once you have symbols the neural network model becomes an inhibitor rather than a useful search function. I suppose I should explain that at some point, but it’s just what it is. Neural networks are very stable at preserving ‘general relations’ amidst fragmentary damage, but they are subject to deformation (dilution), and they become very expensive when you are trying to store reconstructable and traceable data. Symbolic data and search algorithms defeat neural networks because the information density of symbols – rather like the information density of a book versus a memory of reading or writing one – is much higher and more stable.
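The contrast in that paragraph can be sketched with a toy comparison (the names, the dimensionality, and the pair count below are illustrative choices, not anything from the original): an exact symbolic store (a plain hash table) next to a simple linear associative memory in which every key–value pair is superimposed on one shared weight matrix. The hash table recalls perfectly at any load and each entry stays individually inspectable; the distributed memory’s recall degrades (“dilutes”) as more associations pile onto the same weights.

```python
import random

random.seed(0)
DIM = 128      # size of each ±1 code vector (an arbitrary illustrative choice)
N_PAIRS = 100  # stored associations; large relative to DIM to show dilution

def rand_vec():
    return [random.choice((-1, 1)) for _ in range(DIM)]

def store(weights, key, value):
    # Hebbian-style superposition: add the outer product of value and key
    # onto the one matrix that every association must share.
    for i in range(DIM):
        vi = value[i]
        for j in range(DIM):
            weights[i][j] += vi * key[j]

def recall(weights, key):
    # Matrix-vector product followed by a sign threshold.
    return [1 if sum(w * k for w, k in zip(row, key)) >= 0 else -1
            for row in weights]

def overlap(a, b):
    # Fraction of matching components: 1.0 means perfect recall.
    return sum(x == y for x, y in zip(a, b)) / DIM

symbolic = {}                              # exact, traceable symbolic store
weights = [[0] * DIM for _ in range(DIM)]  # one shared weight matrix

pairs = [(rand_vec(), rand_vec()) for _ in range(N_PAIRS)]
for k, v in pairs:
    symbolic[tuple(k)] = v
    store(weights, k, v)

k0, v0 = pairs[0]
exact_score = overlap(symbolic[tuple(k0)], v0)    # always exactly 1.0
diluted_score = overlap(recall(weights, k0), v0)  # typically well below 1.0
print(exact_score, diluted_score)                 # crosstalk corrupts recall
```

At this load (100 superimposed pairs in 128 dimensions) the crosstalk between associations corrupts a noticeable fraction of recalled bits, while the hash table is unaffected by how many entries it holds – a small-scale picture of the dilution and traceability points above.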

But the ideas that you could:

1) create a propertarian ‘conscience’ for any AI.
2) create a propertarian ‘conscience and advisor’ with which to augment a human being.

are pretty fascinating concepts to work with in science fiction.

In fact, I think this is what a ‘Runcible’ (individual education computer) should do for you.

Now, given that it is possible to create a machine MORE MORAL than man, how would that change science fiction from what has been written in the past?