Humans determine which actions are permissible or impermissible by a test of reciprocity, and we measure reciprocity by demonstrated investment of time, effort, and resources. We categorize such investments as interests: from the self, to kin, to property, to shareholder interests, to interests in the physical commons, to interests in the institutional, normative, traditional, and informational commons.

We do this every day. All day. In every human society. In all societies of record.

Just as we converge on Aristotelian language (mathematical measurement of constant relations; scientific due diligence against ignorance, error, bias, and deceit; and legal testimony in operational language), we converge on sovereignty, reciprocity, and property as the calculable unit of measure.

In all social orders of any complexity the test of property is ‘title’.

The problem for any computational method by which we wish to constrain an artificial intelligence is the homogeneity of property definitions within a polity, and the heterogeneity of property definitions across polities.

The problem of creating convergence on the definition of property (and therefore commensurability) is that groups differ in competitive evolutionary strategies, just as classes and genders do (whose strategies are opposite but compatible).

The reason you cannot and did not state a unit of measure (a method of commensurability) is very likely that, judging from the language you use, you would find that unit of measure uncomfortable: all humans desire to preserve room for 'cheating' (theft, fraud, free riding, conspiracy) so that they can avoid the effort and cost of productive, fully informed, warrantied, voluntary exchanges.

And the reason so many of us do that is marginal indifference in value to one another.

I have been working on this problem since the early 1980s, and it still surprises me that the rather obvious evidence of economics and law is entirely ignored by philosophy, just as cost, economics, and physics are ignored by philosophy and theology.

Machines cannot default, as we do, to intuition. They need a means of decidability, even if we call that 'intuition' (default choices).

I am an anti-philosophy philosopher in the sense that I expose pseudo-rationalism and pseudoscience for failures of completeness, because these failures of completeness are simply excuses for sloppy thinking, wishful thinking, suggestion, obscurantism, and deceit.

Mathematics has terms of decidability; logic has terms of decidability; algorithms must have terms of decidability; accounting has terms of decidability; contracts have terms of decidability; ordinary language has terms of decidability; even fictions have terms of decidability (archetypes and plots).

Rule of law evolved to eliminate discretion and the dependence upon intuition, as did testimony, as did science, as did mathematics, as did logic. Programming computers using hierarchical, relational, and textual databases tends to train human beings in the difference between computability, calculability (including deduction), and reason (reliance on intuition for decidability).

The human brain does a fairly good job of constantly solving for both predator (opportunity) and prey (risk), and our emotions evolved to describe the difference.

There is no reason that we cannot produce algorithms that do the same, using property (title) as a limit on action.
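As a minimal sketch of what such an algorithm might look like, the following hypothetical Python fragment treats a registry of titles as the decidable record of demonstrated interests, and permits an action on a resource only if the actor holds title or the title holder has consented (a voluntary exchange). All names here (`Title`, `Registry`, `may_act`) are illustrative assumptions, not an existing system.

```python
# Hypothetical sketch: title to property as a decidable limit on action.
from dataclasses import dataclass


@dataclass(frozen=True)
class Title:
    holder: str    # who demonstrated the investment
    resource: str  # what the investment produced or acquired


class Registry:
    """A minimal title registry: the record of demonstrated interests."""

    def __init__(self) -> None:
        self._titles: set[Title] = set()

    def grant(self, holder: str, resource: str) -> None:
        self._titles.add(Title(holder, resource))

    def holds(self, holder: str, resource: str) -> bool:
        return Title(holder, resource) in self._titles


def may_act(registry: Registry, actor: str, resource: str,
            consented: bool = False) -> bool:
    """Decidable test: an action on a resource is permissible iff the actor
    holds title to it, or the title holder has consented (an exchange).
    Anything else is an imposition of cost (theft, fraud, free riding)."""
    return registry.holds(actor, resource) or consented


registry = Registry()
registry.grant("alice", "field")

print(may_act(registry, "alice", "field"))                 # True: alice holds title
print(may_act(registry, "bob", "field"))                   # False: imposition of cost
print(may_act(registry, "bob", "field", consented=True))   # True: voluntary exchange
```

The point of the sketch is only that the test is mechanical: no intuition is consulted, only the record of title and consent, which is exactly the kind of decidability a machine requires.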
May 17, 2018 3:29pm