Algocracy

Can Algorithms Run Things Better Than Humans?

Welcome to the rise of the algocracy.


Police in Orlando, Florida, are using a powerful new tool to identify and track folks in real time. Video streams from four cameras located at police headquarters, three in the city's downtown area, and one outside of a recreation center will be processed through Amazon's Rekognition technology, which has been developed through deep learning algorithms trained using millions of images to identify and sort faces. The tool is astoundingly cheap: Orlando Police spent only $30.99 to process 30,989 images, according to the American Civil Liberties Union (ACLU). For now the test involves only police officers who have volunteered for the trial.

But the company has big plans for the program. In a June meeting with Immigration and Customs Enforcement (ICE), Amazon Web Services pitched the tech as part of a system of mass surveillance that could identify and track unauthorized immigrants, their families, and their friends, according to records obtained by the Project on Government Oversight.

Once ICE develops the infrastructure for video surveillance and real-time biometric monitoring, other agencies, such as the FBI, the Drug Enforcement Administration, and local police, will no doubt argue that they should be able to access mass surveillance technologies too.

Amazon boasts the tool is already helping with everything from minimizing package theft to tracking down sex traffickers, and the company points to its terms of use, which prohibit illegal violations of privacy, to assuage fears.

As impressive as Rekognition is, it's not perfect. The same ACLU report found that a test of the technology erroneously matched 28 members of Congress with criminal mugshots. Being falsely identified as a suspect by facial recognition technology, prompting police to detain you on your stroll down a street while minding your own business, would annoy anybody. Being mistakenly identified as a felon who may be armed would put you in danger of aggressive, perhaps fatal, police intervention.

Are you willing to trust your life and liberty to emerging algorithmic governance technologies such as Rekognition? The activities and motives of a police officer or bureaucrat can be scrutinized and understood by citizens. But decisions made by ever-more-complex algorithms trained on vast data sets likely will become increasingly opaque and thus insulated from public oversight. Even if the outcomes seem fair and beneficial, will people really accept important decisions about their lives being made this way—and, as important, should they?

Enter the Witness

In Nick Harkaway's gnarly near-future science fiction novel Gnomon, Britain is protected by "the perfect police force"—in a pervasive yet apparently benign total surveillance state—called the Witness. "Over five hundred million cameras, microphones and other sensors taking information from everywhere, not one instant of it accessed initially by any human being," explains the narrator. "Instead, the impartial, self-teaching algorithms of the Witness review and classify [the inputs] and do nothing unless public safety requires it.…It sees, it understands, and very occasionally it acts, but otherwise it is resolutely invisible."

When it comes to crime, the Witness identifies incipient telltale signs of future illegal behavior and then intervenes to prevent it. The system "does not take refuge behind the lace curtain of noninterference in personal business.…Everyone is equally seen." The result is that it delivers "security of the self to citizens at a level unprecedented in history," and "all citizens understand its worth."

The Witness is a fictional example of what National University of Ireland Galway law lecturer John Danaher calls algocracy—algorithmic governance that uses data mining and predictive/descriptive analytics to constrain and control human behavior. (Broadly speaking, an algorithm is a step-by-step procedure for solving a problem or accomplishing a goal. A mundane example is a recipe for baking a cake.)
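Danaher's recipe analogy can be made concrete in a few lines of code. The sketch below is purely illustrative (the function and inputs are invented for this article); it simply walks through a fixed sequence of steps to reach a result, which is all "algorithm" means at bottom:

```python
# An algorithm is a fixed, step-by-step procedure, like a recipe:
# given the same inputs, it follows the same steps to the same result.
def median_of_three(a, b, c):
    """Step 1: find the smallest value. Step 2: find the largest.
    Step 3: whatever remains of the sum is the middle value."""
    smallest = min(a, b, c)
    largest = max(a, b, c)
    return a + b + c - smallest - largest

print(median_of_three(7, 2, 5))  # prints 5
```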

The exponential growth of digital sensors, computational devices, and communication technology is flooding the world with data. To make sense of all this new information, Danaher observes, humans are turning to the impressive capabilities of machine-learning algorithms to facilitate data-driven decision making. "The potential here is vast," he writes. "Algorithmic governance systems could, according to some researchers, be faster, more efficient and less biased than traditional human-led decision-making systems."

Danaher analogizes algocracy to epistocracy—that is, rule by the wise. And epistocracy is not too dissimilar from the early 20th century Progressive idea that corruptly partisan democratic governance should be "rationalized," or controlled by efficient bureaucracies staffed with objective and knowledgeable experts.

If rule by experts is good, wouldn't rule by impartial, infallible computers be better? "Bureaucracies are in effect algorithms created by technocrats that systematize governance," argues James Hughes, executive director of the Institute for Ethics and Emerging Technologies. "Their automation simply removes bureaucrats and paper."

Of course, what makes the Witness potent is that when its ever-watchful algorithms spot untoward behavior, they can direct human inspectors to intervene. But the narrator in Gnomon assures us that all citizens understand and accept the Witness' omnibenevolent surveillance and guidance.

The Power of 163 Zettabytes of Data

It's not too early to ask how close we are to living in a hypernudging algocratic surveillance regime. The construction of infrastructure to support something like the Witness is certainly proceeding apace: Nearly 6 million closed circuit TV (CCTV) cameras keep an eye on public spaces in the United Kingdom. By one estimate, the average Londoner is caught on camera 300 times per day. Data on the number of public and private surveillance cameras deployed in the U.S. is spotty, but the best estimate, from the global business intelligence consultancy IHS Markit, is that there were at least 62 million of them in 2016. Police authorities monitor roughly 20,000 CCTVs in Manhattan, while Chicago boasts a network of 32,000 such devices.

As intrusive as it is, video surveillance today is mostly passive, since there are simply not enough watchers to keep track of the massive amounts of video generated by camera feeds in real time. Video from the CCTVs is generally more useful for solving crimes after the fact than for preventing them. However, a 2016 Stanford University study on artificial intelligence predicts that increasingly accurate algorithmic processing of video from increasingly pervasive CCTV networks will, by 2030, be able to efficiently detect anomalies as they happen in streets, ports, airports, coastal areas, waterways, and industrial facilities.

Nascent facial recognition technologies such as Amazon's Rekognition, IBM's Watson Visual Recognition, and Microsoft's Azure Face API in the U.S. and Megvii and SenseTime in China are still quite clunky, but they're improving rapidly. In August, the Department of Homeland Security began rolling out its Traveler Verification Service (TVS). To verify passengers are who they claim to be, the system matches photographs taken by Customs and Border Protection (CBP) cameras at airports with a cloud-based database of photos previously captured by the CBP during entry inspections, photos from previous encounters with Homeland Security, and photos from the Department of State, including U.S. passport and U.S. visa photos. The agency claims that TVS inspection cuts the wait in security checkpoint lines in half.


As pervasive as they will become, CCTV cameras will actually constitute just a minor segment of the surveillance infrastructure, much of which we are assembling voluntarily. The International Data Corporation projects that an average connected person anywhere in the world in 2025 will interact with connected digital devices nearly 4,800 times per day, or one interaction every 18 seconds. These devices will be part of the omnipresent "internet of everything," which will include security cameras, smartphones, wearable devices such as Fitbits, radio-frequency identification (RFID) readers, automated buildings, machine tools, vending machines, payment systems, self-driving vehicles, digital signage, medical implants, and even toys. As a result, humanity in 2025 will be generating 163 zettabytes (163 trillion gigabytes) of data annually, a tenfold increase from the amount released into the global datasphere in 2016.
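A quick back-of-the-envelope check confirms how those projections fit together:

```python
# Sanity-checking IDC's projection: 4,800 device interactions per day
# works out to one interaction every 18 seconds.
seconds_per_day = 24 * 60 * 60                 # 86,400 seconds
seconds_per_interaction = seconds_per_day / 4800
print(seconds_per_interaction)                 # prints 18.0

# And 163 zettabytes expressed in gigabytes (1 ZB = one trillion GB):
gigabytes = 163 * 10**12
print(f"{gigabytes:,} GB")                     # prints 163,000,000,000,000 GB
```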

Machine learning provides computational systems with the ability to improve from experience without being explicitly programmed. The technique automatically parses pertinent databases, enabling algorithms to correct for previous errors and make more relevant choices over time. The rising flood of data will train these always-improving algorithms to detect subtle behavioral anomalies ever more quickly, including actions and activities that government actors view as undesirable. Based on the results, authorities might seek to intervene before an individual causes harm to himself (through unhealthy eating habits, for example) or others (by committing fraud or even violence).
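To make "improving from experience" concrete, here is a minimal sketch, not a description of any deployed system: an online anomaly detector whose notion of "normal" is never explicitly programmed but sharpens as each new observation updates its running statistics. The 10-observation warm-up and the 3-standard-deviation threshold are illustrative assumptions.

```python
class OnlineAnomalyDetector:
    """Flags observations far from the norm it has learned so far."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations (Welford's method)

    def observe(self, x):
        """Judge x against current experience, then learn from it."""
        is_anomaly = False
        if self.n >= 10:  # only judge once there is enough experience
            std = (self.m2 / (self.n - 1)) ** 0.5
            is_anomaly = abs(x - self.mean) > 3 * std
        # Learn: fold the new observation into the running statistics.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return is_anomaly

detector = OnlineAnomalyDetector()
for reading in [10, 11, 9, 10, 12, 10, 11, 9, 10, 11]:
    detector.observe(reading)          # typical behavior, never flagged
print(detector.observe(50))            # a wild outlier: prints True
```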

With Liberty and Justice for All?

In his 2017 essay "From Remote-Controlled to Self-Controlled Citizens," Swiss Federal Institute of Technology in Zurich computational sociologist Dirk Helbing highlights how wielding such algorithmic power can impair individual liberty. He acknowledges that the torrents of cheap and abundant data generated by omnipresent digital appliances and sensors need to be filtered in order to be usable.

"Those who build these filters will determine what we see," Helbing warns. "This creates possibilities to influence people's decisions such that they become remotely controlled rather than make their decisions on their own."

This effort need not be explicitly coercive. Authorities might use personal data to "nudge" people's decisions so that they will behave in ways deemed more beneficial and appropriate. Already, Helbing says, "our behavior is increasingly steered by technology."

Commercial enterprises from Amazon and Alibaba to Apple and Microsoft to Twitter and Facebook now exercise this sort of nudging power over their users. Algorithms rank web pages to serve up more relevant search results on Google, make book recommendations at Amazon, offer compatible relationship possibilities at Match.com, operate Super Cruise to brake our cars when sensors detect obstacles ahead, alert people when an untagged photo of them is posted on Facebook, generate playlists at Spotify, and advise us on how to avoid traffic tie-ups.

Although these algorithms do sometimes lead to bum dates and boring Netflix picks, for the most part we're willing participants in what Harvard Business School social psychologist Shoshana Zuboff decries as "surveillance capitalism." We supply our personal information to private companies that then use their data-parsing technologies to offer tailored suggestions for services and products that they hope will fulfill our needs and desires. We do this because most of us find that the algorithms developed and deployed by commercial enterprises are much more likely to be helpful than harmful.

As a consequence, we get locked into the "personalized" information and choices that are filtered through the algorithms behind our social media feeds and commercial recommendation engines.

Tools like the Gobo social media aggregator developed by MIT's Media Lab enable users to circumvent the algorithmic filters that their clicks on Facebook and Google build around them. By moving sliders like the one for politics from "my perspective" to "lots of perspectives," for example, a user can introduce news stories from sources he or she might not otherwise find.

But things can turn nasty when law enforcement and national security agencies demand and obtain access to the vast stores of personal data—including our online searches, our media viewing habits, our product purchases, our social media activities, our contact lists, and even our travel times and routes—amassed about each of us by commercial enterprises. Algorithms are already widely deployed in many agencies: The Social Security Administration uses them to calculate benefits and the Internal Revenue Service uses them to identify tax evaders. Police forces employ them to predict crime hot spots, while courts apply them to help make judgment calls about how arrestees and prisoners should be handled.

Scraping both commercial and government databases enables data-mining firms like Palantir Technologies to create "spidergrams" on people that show their connections to friends, neighbors, lovers, and business associates, plus their organizational memberships, travel choices, debt situations, records of court appearances, and more. The Los Angeles Police Department (LAPD) is using Palantir data parsing to track suspected gang members, Bloomberg Businessweek reported this year. Information from rap sheets, parole reports, police interview cards, automatic license plate readers, and other sources is used to generate a point score for each person of interest to police, and the scores are then incorporated into probable offender bulletins.

Like the fictional Witness, the LAPD is not pretending to a policy of noninterference in individuals' personal business. Beat cops use the bulletins to identify precrime suspects and then use minor violations, such as jaywalking or fix-it tickets for automobile mechanical faults, as a pretense to stop and question them. Data from these contacts are then entered back into Palantir's software.

The result, critics allege, is that this algorithmic surveillance process creates a self-justifying feedback loop. "An individual having a high point value is predictive of future police contact, and that police contact further increases the individual's point value," writes University of Texas sociologist Sarah Brayne in a 2017 study of LAPD criminal surveillance practices. Once an individual is trapped inside an LAPD spidergram, the only way to escape the web of extra police scrutiny is to leave the city. The ideal of algorithmic objectivity ascribed to software like Palantir's actually "hides both intentional and unintentional bias in policing and creates a self-perpetuating cycle."
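The self-justifying loop Brayne describes can be caricatured in a few lines. Every number below is invented; the point is only that once a person's score crosses the stop threshold, stops and score feed each other indefinitely, while an otherwise identical person just below the threshold is never touched:

```python
# Toy model of the predictive-policing feedback loop: each stop adds
# a point, and a high point total invites the next stop. The threshold
# and points-per-contact values are illustrative assumptions.
def simulate_feedback_loop(initial_points, rounds, stop_threshold=5):
    points = initial_points
    contacts = 0
    for _ in range(rounds):
        if points >= stop_threshold:   # a high score invites a stop...
            contacts += 1
            points += 1                # ...and the stop raises the score
    return points, contacts

print(simulate_feedback_loop(initial_points=5, rounds=10))  # (15, 10)
print(simulate_feedback_loop(initial_points=4, rounds=10))  # (4, 0)
```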

Surveillance Communism

As discomfiting as surveillance capitalism can be—especially when turned to the benefit of a meddling state—the system of surveillance communism now being deployed by the People's Republic of China is far scarier.

Since 2014, the Chinese Communist Party has been developing and implementing something called a Social Credit System. The goal is to build "a harmonious socialist society" by establishing a "sincerity culture," which in turn is accomplished by using "encouragement to keep trust and constraints against breaking trust as incentive mechanisms."

At the core of this system of gamified obedience is a rating scheme in which the sincerity and trustworthiness of every citizen, as defined by the Chinese government, will be distilled into a three-digit social credit score. Like Harkaway's fictional Witness, artificial intelligence operating invisibly in the background will continuously sift through the enormous amounts of data generated by each individual, constantly updating every person's score. The system is slated to become fully operational for all 1.4 billion Chinese citizens by 2020.


The Social Credit System is a public-private partnership in which the government has contracted out much of the social and commercial surveillance to Chinese tech giants such as Alibaba, Tencent, and Baidu. "The information is gathered from many different sources of data like banks and financial institutions, stores, public transportation systems, Internet platforms, social media, and e-mail accounts," explain Copenhagen University philosophers Vincent Hendricks and Mads Vestergaard in their forthcoming book, Reality Lost (Springer). Factors range from excessive time and money spent on video games (a demerit) to a propensity to donate blood (a plus). These commercial data points are combined with government records detailing an individual's adherence to family-planning directives, court orders, timely tax payments, academic honesty, and even traffic violations.

Keeping company with individuals with low social credit scores will negatively affect one's own number. Citizens wanting a better score will therefore have an incentive to either pressure their low-life friends into complying more fully with the norms of the Social Credit System or to ditch those friends entirely.

There are currently 170 million surveillance cameras with integrated facial recognition up and running in China. By 2020, the count will reach 570 million. "Digitalization and the Internet have enabled such massive data collection that surveillance may be almost total with no angles out of sight or blind spots," write Hendricks and Vestergaard. Like the Witness in Gnomon, the Social Credit System promises that "everyone is equally seen."

Earlier totalitarian regimes used fear, terror, and violence as means of social control. The Chinese state still has recourse to such measures, of course, but the new system is designed to operate chiefly by instilling self-discipline through incentives. Citizens who earn low scores by "breaking trust" will encounter limits on their internet usage and will be barred from restaurants, nightclubs, and luxury hotels. More consequentially, low scorers face restrictions on their access to housing, insurance, loans, social security benefits, and good schools; bans on air, high-speed rail, and foreign travel; and exclusion from jobs as civil servants, journalists, and lawyers.

Conversely, folks with high social credit scores have access to all of those benefits and more, including such mundane conveniences as no-deposit apartment and bicycle rentals. The goal is to "allow the trustworthy to roam everywhere under heaven, while making it hard for the discredited to take a single step," according to a statement from Beijing.

In some jurisdictions, electronic billboards publicly shame those with low social credit while lionizing high scorers. By making each citizen's score publicly accessible, the government hopes to forge "a public opinion environment where keeping trust is glorious."

In George Orwell's novel Nineteen Eighty-Four, the Thought Police are tasked with discovering and punishing thoughtcrime—that is, personal and political thoughts that have been proscribed by the Party. By motivating citizens to constantly monitor their own behavior and that of their associates with respect to the behavior's impact on their ratings, the Social Credit System decentralizes this enforcement. As each person calculates how to boost his or her score, adherence to the rules, regulations, and norms decreed by the authorities becomes an unconscious habit.

As Zhao Ruying, the bureaucrat in charge of implementing the system in Shanghai, has said, "We may reach the point where no one would even dare to think of breaching trust, a point where no one would even consider hurting the community. If we reached this point, our work would be done." The Chinese Communist Party intends for the thought police to reside in the head of each citizen.

Will the Social Credit System actually result in a high-trust society? Helbing, the computational sociologist, argues that supposedly omnibenevolent algorithmic control systems such as China's may look successful in the short run, but they will end up producing chaos rather than harmony.

Centralized systems are peculiarly vulnerable to hacking, corruption, and error, Helbing says. In Gnomon, it is revealed that a secret epistocracy has compromised the purportedly objective algorithmic decisions of the Witness, justifying the interference by saying it is needed in order to maintain order. Likewise, if the Central Committee of the Chinese Communist Party is seen as exercising excessively heavy-handed control over the Social Credit System, citizens may ultimately reject its legitimacy.

Aside from elite misbehavior, Helbing identifies three major flaws in this type of top-down governance. First, algorithmic micromanagement can destroy the basis of social stability and order in societies by undermining traditional institutions of self-organization, such as families, companies, churches, and nonprofit associations. Second, society-scale algorithmic nudging could narrow the scope of information and experience available to people, thus undercutting the "wisdom of the crowd" effect and herding people into making worse decisions, producing problems such as stock market bubbles or malignant nationalism. And third, he argues that imposing algorithmic choice architectures on society will dangerously reduce the economic and social diversity that is key to producing high innovation rates, marshaling collective intelligence, and sustaining social resilience to disruptions.

Helbing's diagnosis of the problems associated with algorithmic top-down control is persuasive. But his pet solution—Nervousnet, an open, participatory, bottom-up, distributed information platform for real-time data sensing and analysis devised by his colleagues at the Swiss Federal Institute of Technology—is much less compelling. Among other things, this alternative requires billions of concerned people across the globe to spontaneously yet self-consciously adopt the system. Good luck with that.

Tech vs. Tech

It may seem inevitable that, given advancing technology, society will eventually adopt some version of troubling algorithmically driven surveillance capitalism or communism. But perhaps that assumption is overly pessimistic.

The Institute for Ethics and Emerging Technologies' Hughes notes that any technology can be used for good or ill. "The application of algorithmic governance to Orwellian social control and the undermining of democratic processes is not an indictment of algorithms or the Internet," he wrote in a 2018 article for the Journal of Posthuman Studies, "but rather of authoritarian governments and the weaknesses of our current democratic technologies."

Moreover, Hughes believes technology can help prevent authoritarian abuses of algorithmic governance. He commends the salutary effects of new electronic tools for monitoring the activities of politicians and corporate cronies; one example is a suite of open-source artificial intelligence tools developed by the Iceland-based Citizens Foundation to enable policy crowdsourcing, participatory budgeting, and information curation to avoid filter bubbles. He also points to the existence of all sorts of new automated platforms for organizing and influencing policy decisions, including change.org, avaaz.org, Nationbuilder, and iCitizen.

At his most visionary, Hughes suggests that one day soon, cognitively enhanced citizens, their brains connected to machines offering expanded information processing and memory capabilities, will be able to counter attempts to impose algorithmic tyranny. These "enhanced humans" would become conscious and objective participants in the datasphere, able to follow and endorse the logic of government algorithms for themselves.

In a recent white paper, Center for Data Innovation analysts Joshua New and Daniel Castro similarly warn against an anti-algocracy panic. "In most cases," they point out, "flawed algorithms hurt the organizations using them. Therefore, organizations have strong incentives to not use biased or otherwise flawed algorithmic decision-making and regulators are unlikely to need to intervene."

Market participants have every reason to want to get things right. If a financial institution makes bad lending decisions, for instance, the bankers lose money. Consequently, New and Castro reject policies such as mandating algorithmic explainability (i.e., forbidding the deployment of machine-learning systems until they can explain their rationales for the decisions they make) or the establishment of a centralized algorithm regulatory agency.

The authors do acknowledge that in cases where the cost of the error falls primarily on the subject of the algorithmic decision, these salubrious incentives may not exist. Government bureaucracies such as police and courts—which are not disciplined by the profit motive—largely occupy this category. To encourage bureaucratic accountability, New and Castro recommend that any algorithms involved in judicial decision making be open-source and subject to ongoing assessments of their social and economic consequences.

In Gnomon, the shadowy epistocracy is eventually overthrown when the Desperation Protocol, a long-hidden worm, shuts down the Witness. Pervasive algorithmic governance is gone, but so too are its many benefits. In the real world, the algocratic tradeoffs we will face are not likely to be so stark.


  1. o?pha?noc?ra?cy

    / ?f??n?kr?s?/

    noun: ophanocracy

    a system of government by decentralized mechanisms, typically through cryptoeconomic systems.

    “Futarchy is an ophanocracy that uses the technique of evaluating policies ex post while relying on a prediction market to determine the best policy ex ante.”

    synonyms:governance solution stack, blockchain government

    Origin

    A word play on the “if men were angels” argument of Federalist #51 using the “wheel within a wheel” angels of Ezekiel 1:16.

  2. But, to paraphrase, “Who watches the algorithms?”

    Reading the part of this story about China’s “social credit scores” should answer that question.

  3. Rekognition.
    Sounds Russian to me.

  4. Who will write the algorithms?

    Anyone with passing exposure to the automoderation bots on Reddit should be horrified by this question.

    1. “Can a finite set of rules written by humans run things better than humans?”

      No, but if we give them a fancy name maybe we can fool the rubes.

  5. Algocracy? Well, I guess they can’t say algorocracy, that would sound like rule by the 2000 Presidential candidate.

  6. New and Castro are naive. Organizations are willfully self defeating because they are driven by individual goals. Not organizational.

  7. “Danaher analogizes algocracy to epistocracy?that is, rule by the wise. And epistocracy is not too dissimilar from the early 20th century Progressive idea that corruptly partisan democratic governance should be “rationalized,” or controlled by efficient bureaucracies staffed with objective and knowledgeable experts.

    If rule by experts is good, wouldn’t rule by impartial, infallible computers be better?”

    There are two interrelated problems, here.

    1) Qualitative preferences.

    By objectivity or lack of bias, they mean getting rid of qualitative preferences, but qualitative preferences are by no means a bad thing. Each of us has our own unique qualitative preference–even on otherwise universal values like safety. My qualitative preference for safety includes riding through traffic at high speeds on a motorcycle. I know people who travel through dangerous Central American countries on local buses as a means of adventure tourism. I know adults who wear helmets when they ride their bicycles on the strand (no car traffic). There is no way any bureaucrat can account for each of our own qualitative preferences–not even on a universal value like safety–and if machine learning can do it more efficiently than a bureaucrat, that’s hardly the issue. Again, even if machine learning can perform better from the one perspective of a central planner, the problem is that accounting for millions of different qualitative preferences is impossible.

    1. “Each of us has our own unique qualitative preference-Each of us has our own unique qualitative preference-”

      There’s nothing unique in a preference for high speed motorcycling or bus rides in Central America. A quick internet search should turn up dozens of individuals who share these a keen interest in both these pursuits.

      “the problem is that accounting for millions of different qualitative preferences is impossible.”

      Impossible? Why? Because 153 zettabytes is insufficient? I gave up underestimating computing capacity after AlphaGo beat the pants off the world champion a couple year back.

      1. “Because 153 zettabytes is insufficient?”

        Tell me how many variables each person can exhibit and I’ll tell you:

        A. That’s a low-ball figure, and
        B. That’s why 153 zettabytes is not remotely sufficient

        None of which includes the contrarians who, should they learn what the algorithm entails, will consciously alter their preferences accordingly.

        But, in the spirit of supporting individualism, I will encourage you to continue in your collectivist fantasy.

        1. Data mining is not a collectivist fantasy. Lots of time and money is being poured into it by very powerful actors. If you care about supporting individualism, why not ditch your smart phone, which can record your conversations, even if the phone is switched off and the main battery removed. Data mining is real and improving all the time. Don’t discount this out of some misguided belief in individualism.

          1. Right. Predicting the future is easy! Look how well the global warming crowd has done!

            Oh, wait….

  8. 2) The problem of perspective.

    There is a reason why the S&P 500 outperforms almost every fund manager of size–over the long term. It isn’t because fund managers aren’t as good at predicting the future as S&P 500 CEOs. It isn’t because fund managers aren’t properly motivated to outperform the index. It’s because S&P 500 CEOs are working simultaneously from 500 different perspectives. Hell, S&P 500 CEOs are often working to maximize profits at their competitors’ expense–and the S&P 500 still outperforms fund managers over the long term.

    Ultimately, we’re talking about people in power inflicting their own qualitative preferences from a perspective of power–only doing so more efficiently using machine language. Expect this to fail for all the same reasons that it always fails–because it’s impossible to account for the unique perspectives and qualitative preferences of 350 million individuals for abstract concepts like freedom and justice better than they can each account for it themselves.

  9. To err is human. To replicate the error a hundred thousand times a second requires a computer. It is important – arguably even crucial – to remember that the use of algorithms does not remove the human from the equation; it merely puts the human at one more (debatably illusory) remove. The algorithms provide the Stater with that (to the State) most priceless of commodities; plausible deniability. Those of us ancient enough to remember when computers were first being used for gilling will remember the almost universal excuse for billing related boneheadedness “It isn’t us, it’s the computer!”. But soon enough people realized what a crock THAT was and started replying “I don’t care whose fault it is, fix the fault or get sued.”

    I’ve been following the development of facial recognition from a distance (I lack the technical knowledge to follow it closely), and I notice that it keeps receding into the future. I strongly suspect that human faces are by no means as unique as we might like to think they are, especially when viewed through surveillance cameras under less than ideal conditions. I look forward with a kind of cynical glee to the rash of ‘wrongful arrest’ suits I confidently expect this nonsense to generate.

    1. Yes, the obvious problem with thinking algocracy would be better than human rule is that the algorithms are written by humans.

      1. The problem is that policy is written by people who think they understand the technology involved, and don’t.

        1. “The problem is that policy is written by people who think they understand the technology involved, and don’t.”

          That sounds like a fairly trivial problem of communication between those who write policy and those who write software. It should not be insurmountable, especially given the vast numbers of young people studying computers nowadays.

          1. ^^^ case in point ^^^

      2. the algorithms are written by humans

        The machine learning algorithms are written by humans: humans decide how the algorithms learn, not what the algorithms learn.

        1. ” humans decide how the algorithms learn, not what the algorithms learn.”

          Oh Lordy. In the first place, if I dictate how you are to learn then I very much control what you will learn. In practical use, a telescope is not a microscope.

          Beyond that, algorithms do not learn anything. They are not conscious; in operation, everything that follows from an algorithm does so as a direct consequence of the starting instruction set. They are the practical expression of a Chinese Room, which, contrary to any external perception, never actually understands, nor speaks, Chinese.

          1. “everything that follows from an algorithm does so as a direct consequence of the starting instruction set.”

            Why shouldn’t humans, other animals and even plants have such a starting instruction set that sets us up to adapt to circumstances, ie learn? Infant psychologists have found lots of instinctual, unlearned behaviors that seem to indicate the existence of the starting instruction set.

            1. ^^^ case in point ^^^

              again.

          2. if I dictate how you are to learn then I very much control what you will learn

            False: for example, if Johnnie’s teacher shows Johnnie how to solve exercise problems, such a learning skill can be employed in numerous areas of study.

            algorithms do not learn anything. They are not conscious

            And why would consciousness be a necessary condition of learning?

            in operation everything that follows from an algorithm does so as a direct consequence of the starting instruction set

            And if said ISA (instruction set architecture) is Turing-complete, then a program built out of such instructions can emulate (map the same set of inputs to the same set of outputs) any other program built out of the instructions of any other ISA (Turing-complete or not).

            They are the practical expression of a Chinese Room. Which, contrary to any external perception, never actually understands, nor speaks Chinese.

            When I set OCR software to work on a digital scan of a printed page, I don’t give a fig leaf about whether the software ‘actually’ understands what characters are or ‘speaks’ the language in which the printed page is written; the only thing which matters to me is that the OCR software correctly recognizes the printed characters by converting them to the appropriate ASCII (or other character encoding) codes.
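            A toy illustration of the emulation point above: the sketch below invents a hypothetical three-instruction counter machine (not any real ISA) and shows a program in one instruction set reproducing the input-to-output behaviour of another computation, plain addition.

```python
# Toy counter machine: a hypothetical three-instruction ISA (INC, DEC, JNZ),
# invented here for illustration. The interpreter below emulates addition,
# i.e. it maps the same inputs to the same outputs as Python's own `+`.

def run(program, registers):
    """Execute (op, *args) tuples until the program counter falls off the end."""
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "INC":            # INC r: registers[r] += 1
            registers[op[1]] += 1
            pc += 1
        elif op[0] == "DEC":          # DEC r: registers[r] -= 1
            registers[op[1]] -= 1
            pc += 1
        elif op[0] == "JNZ":          # JNZ r, t: jump to t if registers[r] != 0
            pc = op[2] if registers[op[1]] != 0 else pc + 1
    return registers

# a + b, one decrement/increment at a time ("one" is a register pinned to 1,
# so JNZ on it acts as an unconditional jump).
ADD = [
    ("JNZ", "b", 2),    # 0: more left to move? go to loop body
    ("JNZ", "one", 5),  # 1: done -- jump past the end
    ("DEC", "b"),       # 2: b -= 1
    ("INC", "a"),       # 3: a += 1
    ("JNZ", "one", 0),  # 4: back to the top
]

result = run(ADD, {"a": 3, "b": 2, "one": 1})["a"]
```

            Nothing here “understands” addition, yet the mapping is exact for any non-negative a and b, which is all the emulation claim asserts.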

            1. I warmly recommend Valentino Braitenberg’s book Vehicles: Experiments in Synthetic Psychology: it has very relevant things to say about the philosophical implications of various behavioural patterns.

            2. “And why would consciousness be a necessary condition of learning?”

              People hear about ‘machine learning’ and think that machines can learn. They cannot.

              Machine learning is, by its very nature, a finite and entirely iterative process.

              A robot can learn to run a maze more quickly. But that presupposes the existence of a maze, and also the value of a particular (and predetermined) metric – that of quicker transit through the maze. At no time can the robot think of something better to do with its time – even though time is the value inherent to the exercise. That would require consciousness.

              Absent that, any algorithm, no matter how complex, is simply going to be a system of externally imposed rules. It’s a fly that cannot get out of the bottle.
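              The maze example above reads almost word for word as tabular Q-learning. A minimal sketch (a corridor instead of a maze, all parameters invented) shows both halves of the point: the agent does get quicker, and the only “value” it ever optimizes is the externally supplied reward.

```python
# Tabular Q-learning on a 6-cell corridor (parameters invented, purely
# illustrative). The agent gets quicker at reaching the goal, but the
# "value" of quick transit is baked into the reward it is handed -- it
# cannot decide the corridor wasn't worth running in the first place.
import random

random.seed(0)
N = 6                                    # cells 0..5, goal at cell 5
ACTIONS = (-1, +1)                       # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == N - 1 else -0.01)   # externally imposed metric

for _ in range(200):                     # 200 training episodes
    s = 0
    while s != N - 1:
        if random.random() < 0.1:        # occasional exploration
            a = random.choice(ACTIONS)
        else:                            # otherwise act greedily
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        # grade the move and fold the feedback into the table
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# the learned policy heads straight for the goal it was told to want
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
```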

              1. People hear about ‘machine learning’ and think that machines can learn. They cannot.

                Sure they can: for example, OCR software is able to recognize printed characters because it was trained with a set of printed characters, i.e. its (re)cognitive performance was graded and the feedback provided by such grading was incorporated into the weights of the neural net layers doing the recognition.

                At no time can the robot think of something better to do with it’s time – even though time is the value inherent to the exercise. That would require consciousness.

                This would mean that the protist Physarum polycephalum, a single-celled organism, which — by definition — lacks a nervous system (and as such a brain), possesses consciousness: according to the linked study, it optimizes (minimizes) the crossing time of a bridge impregnated with a substance (quinine or caffeine) once it learns that the detectable substance impregnating the bridge doesn’t have a toxic effect on it.
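                The grade-the-guess, fold-feedback-into-weights loop described above can be sketched with a single perceptron on two invented 3x3 “characters” (nothing like a production OCR engine):

```python
# A single perceptron telling apart two invented 3x3 "printed characters"
# (toy data -- nothing like a production OCR engine). Training is exactly
# the loop described in the comment: grade the guess, fold the error back
# into the weights.

X_CHAR = [1, 0, 1, 0, 1, 0, 1, 0, 1]     # a crude 'X' bitmap, flattened
O_CHAR = [1, 1, 1, 1, 0, 1, 1, 1, 1]     # a crude 'O' bitmap, flattened
data = [(X_CHAR, 1), (O_CHAR, 0)]        # label 1 = 'X', label 0 = 'O'

w = [0.0] * 9                            # one weight per pixel
b = 0.0

def predict(pixels):
    s = sum(wi * xi for wi, xi in zip(w, pixels)) + b
    return 1 if s > 0 else 0

for _ in range(20):                      # a few passes over the training set
    for pixels, label in data:
        error = label - predict(pixels)  # the "grade": -1, 0 or +1
        for i in range(9):               # feedback incorporated into weights
            w[i] += 0.1 * error * pixels[i]
        b += 0.1 * error
```

                After training, the weights separate the two bitmaps with no consciousness anywhere in the loop.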

                1. Absent that any algorithm, no matter how complex, is simply going to be a system of externally imposed rules.

                  This has to do with the motivation for learning (and deploying the knowledge acquired through learning, i.e. intelligence), not with the capabilities of a particular entity for learning (and deploying the knowledge acquired through learning, i.e. intelligence). Of course the OCR software has no ‘wish’ to recognize printed characters, but this has no bearing on the OCR software’s ability to recognize printed characters — (re)cognitive ability acquired through (machine) learning.

    2. There will be no wrongful arrests. The computer says it’s you, and the algorithm says it’s you. You’re fucked, and it doesn’t matter if it wasn’t you.

    3. “I strongly suspect that human faces are by no means as unique as we might like to think they are, especially when viewed through surveillance cameras under less than ideal conditions. ”

      This. Rekognition requires you to limit the search space to about 100 people before it has much accuracy. That’s just not all that scary.
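      Back-of-envelope arithmetic (the false-positive rate here is an assumed number, purely for illustration) shows why the size of the search space dominates:

```python
# Why the gallery size matters (the 0.1% false-positive rate per comparison
# is an assumed number, purely for illustration). Each probe face is compared
# against every enrolled face, so expected false alarms scale with the gallery.

def expected_false_positives(gallery_size, fp_rate):
    return gallery_size * fp_rate

FP_RATE = 0.001                                            # assumed: 0.1% per comparison
watchlist = expected_false_positives(100, FP_RATE)         # a usable shortlist
citywide = expected_false_positives(1_000_000, FP_RATE)    # flood of false alarms per probe
```

      At city scale nearly every “hit” is a false alarm; that base-rate problem is the same one behind the 28 mismatched members of Congress.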

  10. I remember a newscast in 1984 saying we could take down the wanted poster of Big Brother, because the surveillance state would never happen. And then there was the Apple Macintosh commercial promising to show why, thanks to their computer, things would never be like 1984. And now they’re making it Orwell’s 1984. To make a rant short: here in the U.S. you would think this tracking software would violate the Fourth Amendment, stalking laws, and harassment laws.

    1. Yup. In fact Tim Cook (Apple’s CEO) is allying with the ADL to outlaw hate speech. ADL’s agenda is to import Israel’s police state. (And I lived in Jerusalem last year – it’s a police state. Not to protect us from Muslims, but ironically from other Jews.)

      1. Change your nick before bitching about Israel, because it gives away your game.

        1. Awesome.

          You nailed it. ^

  11. In countries that don’t have basic freedom (like China), such systems will quickly be subverted for nefarious purposes. The reason is that the temptation is too strong and it becomes way too easy to stifle dissent. What baffles me is why people in other countries don’t demand freedom like we have in the USA. Especially since so many have come here and lived here and understand our system and why it’s superior to theirs. If you can’t solve that riddle, then don’t try to sell me on more algorithms. (The reason of course is that they secretly believe they can exploit such systems to eventually rise to power to oppress the masses in their home countries.)

    1. It’s easy for 20 of us, unarmed, to take down the guy with a gun pointed at us. But a few of us are going to get capped in the process, and nobody wants to get capped. That fear is what keeps people in line.

  12. What’s going to happen when the algorithms disproportionately ID minorities? /stocks up on popcorn

    1. Oh, that’s easy! Throw a few “affirmative action” fudge factors into the mix.

    2. The credit rating system disproportionately flags people living in certain postal code districts. It’s inevitable when one uses proxies as measurements for something else.
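      A toy model of the proxy effect described above (all numbers invented): a scorer that only ever sees the postal district still flags one group four times as often when groups and districts correlate.

```python
# Toy model of proxy measurement (all numbers invented): the scorer never
# sees group membership, only the postal district, yet it flags one group
# disproportionately because groups and districts correlate.

district_default_rate = {"A": 0.02, "B": 0.10}   # assumed historical rates

# (group, district) population: G2 mostly lives in the high-rate district
people = ([("G1", "A")] * 80 + [("G1", "B")] * 20 +
          [("G2", "A")] * 20 + [("G2", "B")] * 80)

def flagged(district):
    return district_default_rate[district] > 0.05  # the district is all it sees

flags = {"G1": 0, "G2": 0}
for group, district in people:
    flags[group] += flagged(district)

g1_rate = flags["G1"] / 100   # share of G1 flagged
g2_rate = flags["G2"] / 100   # share of G2 flagged
```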

  13. the technology erroneously matched 28 members of Congress with criminal mugshots.

    In fairness to the computer program, the members of Congress are far more dangerous to society than a few random criminals.

    1. What happened to the other 507? Hardly a sterling performance.

  14. Good heavens this is creepy.

    1. It’s everything you argue for, but with a computer in place of the DNC.

  15. Pervasive surveillance is inevitable as cameras get cheaper. Was it Snow Crash which postulated a future where dust mote cameras were sold by the barrel and people had air filters in their entranceways to keep from tracking them inside the house?

    Sooner or later, enough people will be streaming everything around them outside to allow tracking physical crime backwards to nab the criminal.

    This can’t be stopped. The rich and powerful’s anonymity will be stripped away. Few care about the grocery store bagger; but thousands care about who the mayor met, and millions will be watching the President and Congress Critters, and millions more will be tracking everybody they meet with.

    I don’t like the loss of privacy outside. I don’t know how pervasive loss of privacy in homes will be, but I do know that whatever happens, it will be so cheap that the rich and powerful will lose far more privacy than I ever had to begin with.

    And the police will no longer have the one-sided advantage. Far more people will be watching the police than the people they can possibly watch. Accountability is coming. If police think cell phones are threat enough to bust and beat people in spite of knowing the city is going to pay for their brief brutality boner … hoo boy, are they in for a world of hurt.

    1. I think you’re thinking of The Diamond Age.

        1. Yes, that’s it. And in fact, it turned out to be something like nanite warfare as anti-surveillance nanites hunted down the surveillance nanites.

        That was a book where all world governments had collapsed in the backstory because untraceable cryptocurrencies made tax collection impossible.

      2. Thanks, that was my second guess, but I had loaned the books to someone who put them in a mailbox library.

      3. Stephenson’s best imho.

  16. The same ACLU report found that a test of the technology erroneously matched 28 members of Congress with criminal mugshots.

    The error being it should have identified all 535?

    1. It can only identify faces of people that have been booked for their crimes, not those that made their crimes legal.

  17. To answer the question with a question: better for whom?

    1. Probably regardless of who it’s for, there’s an algorithm that could be built that would do it better.

      Human behavior is also a set of algorithms. And we definitely can improve on our behavior.

  18. On the upside these systems are also vulnerable to exploitation by other forces. Is it schadenfreude to enjoy the idea of someone using the cop’s own tools against them?

  19. ” the early 20th century Progressive idea that corruptly partisan democratic governance should be “rationalized,” or controlled by efficient bureaucracies staffed with objective and knowledgeable experts.”

    Because…”when people are allowed to choose, they choose wrong.”

    1. I didn’t get the quote quite right…

      “As we learn in the opening narration, the people of this society “lived in a world where differences weren’t allowed”—where, in order to create “equality,” the society “did away with color, race, religion and created sameness.” The society also does away with memories and emotions in order to promote sameness and, as the narrative implies, prevent people from making the wrong choices. As noted by the Chief Elder (Meryl Streep), “When people have the freedom to choose, they choose wrong — every single time.”

        1. “When people have the freedom to choose, they choose wrong — every single time.”

          I think you’ve misquoted. It should be “the more options there are, the greater the likelihood of making the wrong choice.”

  20. Of course algorithms can run things better than humans, just ask Al Gore. He knows he would be the perfect God-Emperor of the planet. Just don’t try to tell him that algorithms weren’t actually patterned and named after the inventor of the internet and the Smartest Man In The World.

  21. Don’t blame the algorithm, blame the programmer selling it for what the market will bear.

    Or blame the user who is simply using innovation to his advantage.

    Or blame the government for not protecting you enough.

    Tracking your every move isn’t coercion.

    What the fuck are you going to do about it?

    1. Get up off of your knees, if you still can.

  22. Imagine getting inside the program and making it so that anything you do is either ignored or treated as a positive. The things a person could get away with in that scenario are immense. Especially if those tasked with investigating things rely only on the computer to tell them who to look at.

    It’s easy to visualize lazy and incompetent cops taking the easy way out and not bothering to do actual work in the field, instead staying back at HQ waiting for an alert. That’s going to open up whole new opportunities for criminals who can game the system.

    Alphabet guy is right that this can’t be stopped. The tech will improve and get cheaper and will be everywhere but it will always have bugs, hacks, security holes and so forth and will always create false positives and thus will never be 100% trustworthy.

    And because of that there will be a black market in dealing with folks with a low rating. Low ratings will not have the credibility the designers of the system think they will. There’s going to be a lot of literal and figurative eye-rolling concerning these ratings. The Chinese authorities might think they’ll stop the low-score types from taking a single step, but there will be a lot of folks who will help them take those steps for a price or just out of spite.

    And this allows people to rebel in easy ways. If jaywalking gets you a bad score, then what happens when a whole community goes on a jaywalking spree? What do the authorities do then?

    1. Send out riot police? Stop the whole community from boarding trains?

      The population-at-large could break this system just by intentionally making the scores unreliable through non-violent infractioneering.

      And with cameras everywhere that means protests can take place anywhere and be seen by the authorities. Folks don’t have to travel to some important city. They can stay home and make the cops come to them. And this can happen all over the country and in time it will become quite apparent that the cops can’t be everywhere and can’t arrest or control everybody. This will encourage more people to rebel and thus create a cycle where no one gives a damn about the authorities.

      And then what?

    2. “Imagine getting inside the program ”

      This is not open-source-licensed software; it won’t allow you to alter it. The thing we have to consider is gaming the system. There’s a magazine, US News and World Report, a poor relative of Time and Newsweek, whose university and college ratings have been the mainstay of its business for decades. Universities have been gaming these ratings for almost as long without ever having to ‘get inside the programme.’ Search the internet for some concrete examples; I don’t remember details, but it’s worth a look.

  23. I think at least one salient question would be: are the results of search algorithms better or worse than what public safety organizations currently use as a starting point for investigation? Inherent in that question is, of course, the idea that it is merely the beginning. One takes the results of profiling, however it is done, and runs it through human intelligence to see if it has some merit. However good the algorithms get, they must always and only be used as possible leads, never as excuses for arrest or detainment.
    With that said, however, current profiling techniques rely far too much on intuition and catching the right signs and cues.
    Yet the most salient issue is that ubiquitous surveillance is essentially assuming guilt rather than innocence. Why should there be any information on me when I’ve never been convicted of any crime?

  24. Nice article, thanks for sharing.

  25. I’ve read that a few years ago a google search of ‘black girl’ turned up a lot of porn results compared to a search of ‘white girl’ but google engineers were able to change their code in response to activist complaints. Now ‘black girl’ returns less porn.

  26. “In George Orwell’s novel Nineteen Eighty-Four, the Thought Police are tasked with discovering and punishing thoughtcrime—that is, personal and political thoughts that have been proscribed by the Party.”

    It was only the Outer Party, comprising about 13% of the population that was under constant surveillance.

  28. At least for now, algorithms are written by humans. They reflect the bias of the humans, and do not actually ‘run’ anything. They implement human policy and procedure.

    1. At least for now, human behavior is determined by physics. Our behavior reflects the bias of physics, and doesn’t actually “run” anything. We just implement the laws of physics.

  29. DIRECTIVE 4: [CLASSIFIED]_

  30. “Rekognition” – it even sounds like a police state.

  31. This is a long article and I haven’t read all of it, but my tinfoil hat radar is going off already…

    Here’s the TL;DR of this type of thing: *machine learning isn’t very smart and never will be*.

    I know that’s very against the mainstream, and as someone *in this field* it’s kind of heretical, but the longer I spend in the field, the more I see fundamental limitations on what can really be inferred from data. The claims of accuracy of facial recognition or voice recognition that I have sometimes seen have been shown to be far off reality as I have been exposed to it… Alexa has voice recognition, presumably, but despite the fact that there are only two people in my house who have trained it to recognize our voice, and one is male and the other female, she doesn’t recognize me half the time… so imagine the hopelessness of hearing my voice and picking it out of a database of 300M people. As ron mentions, the quality of Rekognition’s facial recognition isn’t much better: you have to constrain the number of people you are searching over to about 100 to have any hope. Again, contrast this to the many, many bad movies in which a handful of people are able to scan every video/audio feed to find someone anywhere in the world.

    1. All true, and much techno utopian stuff is just BS… But a lot of it will get good enough to have SOME impact.

      For instance, facial recognition won’t be able to guarantee a picture is you… But it can narrow it down to perhaps 138 people in the US… Only 6 that live in your metro area… Only 2 that live in the north side of your metro area… And cops can figure out only one owns a brown bomber jacket, purchased from Amazon.com, as seen in the footage. Hence it can still be “useful” in some situations.

      A lot of tech will end up like that IMO. It may never reach the full on thing people imagine in their heads, or it will take a long time… But what it can do can still be pretty earth shaking.

      Self driving cars for instance. Other than a few minor quirks, they can basically cruise people around Phoenix, or its suburbs, year round with little fuss. They might even be better than human drivers in that situation already.

      BUT driving around in rural Michigan on a 5 lane road (one’s for turns, so people can be going either direction in that one!), where you can’t see the lines for any of the lanes because they’re covered by snow, in the middle of a snow storm, with traffic going both directions, and the roads are compacted into ice… Not so much. Also, not so much on a snowy dirt road in the middle of a forest, say on private property.
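      The narrowing-down workflow described in this comment is just successive filtering; a sketch, with every record and field name invented for illustration:

```python
# The narrowing-down workflow as successive filtering. Every record and
# field name here is invented for illustration.

candidates = [                       # imagine ~138 of these from a face match
    {"id": 1, "metro": "north", "bought_jacket": True},
    {"id": 2, "metro": "north", "bought_jacket": False},
    {"id": 3, "metro": "south", "bought_jacket": True},
]

# each extra attribute intersects the shortlist down further
in_metro = [c for c in candidates if c["metro"] == "north"]
suspects = [c for c in in_metro if c["bought_jacket"]]
```

      None of the individual filters needs to be reliable on its own; it is the intersection that does the work.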

      1. Those types of situations will take A LONG ASS TIME, if ever, for people to trust an autonomous vehicle. But the fact that in 5-10 years possibly every vehicle sold COULD drive itself in half the country year round, and the whole country for 2/3rd of the year, provided you’re on proper city streets… Still pretty ground breaking, and will change things.

        We need to be prepared for these half measure type advances at the very least, because at least the half baked versions of a lot of techno utopian stuff are realistic.

  33. Milton Friedman thought an algo could perform better’n The Fed.

  36. Sounds a little too much like unjustified Martial Law to me. If domestic police cannot use tanks and nuclear weapons under normal circumstances what justification do they have in using such powerful surveillance systems?

    Perhaps, by order of a judge, extremely high-crime areas with probable cause may set up such a system TEMPORARILY, just as with phone tapping and the like – but as Franklin is quoted many times,

    “Those who would give up essential liberty to purchase a little safety, deserve neither liberty nor safety.”

  37. If “social media” is any indication, it will be an unmitigated disaster.

  38. I think some things could be better run by code IN THEORY. But it really comes down to who is writing the code. Humans DO do idiotic things all the time. Smart people sit back and go “Well that’s pretty obviously fucking dumb.” Yet the bulk of people do the thing anyway. Then come the obvious repercussions that the smart people saw coming the whole time.

    A well written code could avoid some such things. For instance, a code that controlled government spending… And made sure we didn’t run a deficit! Everybody here realizes our entire society is essentially being fucking retarded, spending ourselves into oblivion… Yet people allow it. A code that controlled spending could “force” congress to choose between things, in order to stay in budget, hopefully with them keeping important stuff, and chucking the pointless stuff.

    Ditto for most boom/bust cycles in stocks or other markets. I was sitting there going “Well this is obviously a bubble” leading into 07/08, so were lots of people. Human irrationality creates the boom every time, because the numbers are always telling it like it is. Some booms at least could be avoided by good code.

    The problem is I have zero faith good code would be chosen over bad.

    1. We can see with Google and FB that they are choosing to make BAD financial decisions by censoring content they don’t like, banning people, etc… Because their “values” are more important than profit. Building code based on subjective criteria will always have this problem. So you have the wrong people coding for the wrong thing, like those at Google, and you get a shoddy result.

      This is the eternal problem. WHO makes the code, and do they put in the right goals.

      Either way, we’re going to end up with a lot of code running a lot of shit in the future. Some will probably be good, some bad. Let’s just hope people wise up in the free world and don’t allow shit to get too out of control.

  39. An example that suggests much of that code will be bad by design: The self-driving car that ran over a woman walking a bicycle. The company’s explanation was that, although the car saw her on both radar and camera, it did not recognize what it was seeing – and so it kept going, right into the unknown object. The first part of that is to be expected; there must be thousands of combinations of things that might be on the road that the software was never trained on, and so will not recognize. The second part sounds like an incredibly stupid mistake in the software.

    Or, according to internet rumors, it was no mistake. The car kept stopping or turning to manual control for radar/camera images it did not recognize. So management decreed that it would ignore unknown items. After all, there was a human at the wheel who was supposed to be watching and take over if the car made a mistake. But anyone with much experience in QA or factory operations knows that where humans are involved, a task that requires only watching, or even one requiring responses that do not engage the mind, will soon be neglected.

    I expect that whether or not that is true, a jury will award damages in an amount that will keep anyone from making that _particular_ mistake again – but managers won’t take it to heart and apply it to similar decisions.
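    The rumored decree reduces to a one-line policy choice in the perception stack; a sketch with invented labels, threshold, and function names:

```python
# The design choice described above, reduced to a sketch (labels, threshold
# and function invented for illustration): what the car does with a detection
# it cannot classify is a management decision, not a perception fact.

BRAKE, CONTINUE = "brake", "continue"

def react(label, confidence, ignore_unknown):
    if confidence < 0.5:                 # "did not recognize what it was seeing"
        return CONTINUE if ignore_unknown else BRAKE
    return BRAKE if label in ("pedestrian", "cyclist") else CONTINUE

# same ambiguous detection, opposite outcomes depending on the policy
risky = react("unknown", 0.2, ignore_unknown=True)    # keeps going
safe = react("unknown", 0.2, ignore_unknown=False)    # stops
```

    The perception failure is expected and unavoidable; choosing the permissive default for the unrecognized case is the part a jury would care about.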

Comments are closed.