How To Survive a Robot Uprising
Seeing dark omens of catastrophe in new tech demos.
Rise of the Robots: Technology and the Threat of a Jobless Future, by Martin Ford, Basic Books, 352 pages, $28.99
Martin Ford, author of Rise of the Robots, doesn't like the recent increase in U.S. wage inequality. So he wants to tax the rich more, to fund a basic income guarantee for the poor. (But only the U.S. poor. Other poor don't seem to concern him.)
Maybe you think you've heard this story before. But Ford, a software engineer and businessman, doesn't argue that inequality is unethical or that it will destroy democracy. He instead argues that inequality will soon get much worse, so bad that most adults won't be able to find jobs. So bad the economy will descend into "catastrophe." And all because of robots.
Now, Ford wants to reassure you that he isn't crazy. He isn't one of those people who see robots with human-level intelligence coming soon and superintelligent terminators killing us all soon after. No, Ford just thinks that dumb robots specialized for particular jobs are quite enough reason to panic.
In the old days, if you wanted to scare people into action via fear of a coming catastrophe, you could point to most anything unusual as an omen: an eclipse, a sighting of a strange animal, a king dying young, perhaps even a new strain of music becoming popular. It helped if your coming catastrophe was something, like a flood or war, that everyone knew would come eventually: it was a matter of when, not if.
Today, we know more about how the world works, so fearmongers can't just point to any aberration as an omen. But Ford's fears are thoroughly modern: all those new computer-based gadgets. Such things spook many people today, because super-robots come from a realm of futurist speculation that has landed with a plausible plop into the world we live in. A whole intellectual industry has sprung up to treat computer demos as dark omens.
Ford is correct that, like floods or wars, super-robots are likely to arrive eventually. That is, if our automation technologies continue to improve, it is plausible that in the long run, robots will eventually get good enough to take pretty much all jobs.
But why should we think something like that is about to happen, big and fast, now? After all, we've seen jobs replaced by automation for centuries. Sure, there have been fluctuations in which kinds of jobs are more valued and which are most vulnerable to automation. Wage inequality has also varied. But why shouldn't we just expect these things to stay within roughly the same range of variation we've seen in the past? Workers found new jobs before, and the economy never imploded because of automation; more like the opposite.
Many have cried this wolf before. This isn't the first time people have been so impressed with new tools that they've warned machines may soon make us replaceable. Ford admits this, pointing out that in the 1960s such people were top academics who attracted big press. In the 1980s, I was personally caught up in a similar wave of concern; I left physics graduate school to start a nine-year career researching artificial intelligence (A.I.).
Like many others today, Ford says this time really is different. He gives four reasons.
First, there is a 2013 paper by Carl Frey and Michael Osborne, an engineer and an economist at Oxford University, estimating that 47 percent of U.S. jobs are at high risk of being automated "perhaps over the next decade or two." Ford likes this paper so much that he mentions it in three different chapters. Yet this 47 percent figure comes mainly from the authors "subjectively" (their word) labeling 30 particular kinds of jobs as automatable and 40 as not. They give almost no justification or explanation for how they chose these labels. Such a made-up figure hardly seems a sufficient basis for expecting catastrophe.
Second, Ford thinks recent labor market trends are ominous. In the U.S., median wages have been stagnant and wage variance has increased since about 1970, while the labor share of income, the fraction of adults who work, and the wage premium for college graduates have all fallen since about 2000. Ford sees automation as the main cause of all these trends, but he admits that economists reasonably see other causes, such as changes in demographics, regulation, worker values, organization practices, and other technologies.
Third, Ford notes that the rapid rate at which computer hardware prices fall could let computers quickly displace many jobs, if we reach a threshold where many jobs all require roughly the same computing power. But while computer prices have been falling dramatically for 70 years, the job-displacement rate has held pretty steady. This suggests that jobs vary greatly in the computing power required to displace them and that jobs are spread out rather evenly along this parameter. We have no particular reason to think that, contrary to prior experience, a big clump of displaceable jobs lies near ahead.
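The claim that jobs are spread out evenly along the computing-power parameter can be illustrated with a toy simulation (all numbers, names, and distributions here are illustrative assumptions of mine, not from the review or the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each job has a computing-power threshold, in log terms, at
# which it becomes cheap enough to automate. If thresholds are spread
# evenly and hardware improves exponentially (i.e., log cost falls
# linearly with time), the displacement rate stays roughly steady.
log_thresholds = rng.uniform(0.0, 70.0, size=10_000)  # 70 "years" of progress

displaced_per_year = [
    int(np.sum((log_thresholds >= t) & (log_thresholds < t + 1)))
    for t in range(70)
]
print(min(displaced_per_year), max(displaced_per_year))
# A big clump of jobs near one threshold would instead show up as a spike.
```

Under this sketch, steady historical displacement is evidence against a hidden clump of soon-to-be-displaceable jobs just ahead.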
And then there is Ford's fourth reason: all the impressive computing demos he has seen lately. This is where his heart seems to lie. He devotes far more space to describing things like Google's self-driving cars and language translators, IBM's Jeopardy champion Watson, the flexibly programmable Baxter robot, and Narrative Science's software for writing news articles than to explicating reasons one through three. Only rarely does Ford air any suspicions that such promoters exaggerate the rate of change or the breadth of the impact their new systems will have. (He is somewhat skeptical about the market for 3D printing and about prospects that self-driving cars will increase road throughput soon.) And of course several generations have seen A.I. demos with just as impressive advances over previous systems.
So basically, Ford sees a robotic catastrophe coming soon because he sees disturbing signs of the times: inequality, job loss, and so many impressive demos. It's as if he can feel it in his bones: Dark things are coming! We know robots will eventually take most jobs, so this must be now.
If a big burst of automation takes most but not all jobs, won't those who lose jobs to robots switch to doing jobs that robots can't yet do? After all, this is what we've seen for centuries, and it is the straightforward prediction of labor economics. But Ford says no, new firms like Google and Facebook have few employees relative to sales. As if Google's experience were some sort of universal law, Ford says, "Emerging industries will rarely, if ever, be highly labor intensive." Yet even if this turns out to be true, Ford doesn't explain why old industries can't hire more workers.
Moreover, even if workers could find new jobs, Ford still sees catastrophe if new jobs don't pay as much, increasing wage inequality. The economy will "implode," he says, because the rich just don't spend enough: "A single very wealthy person may buy a very nice car…But he or she is not going to buy thousands of automobiles…The wealthy spend a smaller fraction of their income than the middle class." Ford admits that increasing inequality since 1970 hasn't hurt spending, but he attributes this to increasing debt that can't last. (Yet that debt increase is small compared to the increased inequality.) He ignores the fact that the world economy had increasing wage inequality for centuries without imploding. Worldwide inequality has decreased only recently.
Ford eventually admits that "the global economic system" might "adapt to the new reality" via "new industries producing high-value products and services geared exclusively toward a super-wealthy elite." He calls this "the most frightening scenario of all," comparing it to the dystopian 2013 movie Elysium. In the end, it seems that Martin Ford's main issue really is that he dislikes the increase in inequality and wants more taxes to fund a basic income guarantee. All that stuff about robots is a distraction.
After all, there isn't a fundamental connection between automation and wage inequality; in past eras more automation was associated with less inequality. If there's a connection now, it may be temporary and change again. More important, if we want to increase transfers because we dislike inequality, we don't need to discuss robots at all. It wouldn't matter why inequality is high; we'd just increase transfers when we saw more inequality than we liked. Or set up a system, like a basic income guarantee, to do this automatically.
So why didn't Ford just say this straight out? Perhaps because many others have already taken that direct route, but with limited success. It seems most people just aren't very bothered by current levels of inequality. So they need to be scared with something else.
If I'm not persuaded by Ford's omens, what would persuade me? Well, I take betting odds seriously. Since automation might reduce employment, I've expressed my skepticism about big automation progress soon by betting $1,200 at 12–1 odds that the Bureau of Labor Statistics' measurement of the labor fraction of U.S. income won't go below 40 percent by 2025. And since better computer software should increase the demand for computer hardware, I've bet $1,000 at 20–1 odds that computers and electronics hardware won't be over 5 percent of U.S. GDP by 2025. That's just me, of course, but more and bigger bets like these could tell us what people think when they are willing to put their money where their mouths are. It wouldn't cost that much to create prediction markets with prices that estimate these and a great many other important future events, estimates that are at least as reliable as those from any other public source.
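The stakes and odds above translate into implied probabilities; as a rough sketch (the helper function is mine, the odds are from the bets described):

```python
def implied_probability(odds_to_one):
    # A bet at n-to-1 odds prices the longshot outcome at roughly 1/(n+1),
    # ignoring risk aversion and transaction costs.
    return 1.0 / (odds_to_one + 1.0)

# The two bets described above: 12-1 on the labor share of income,
# 20-1 on computer hardware's share of GDP.
print(round(implied_probability(12), 3))  # ~0.077
print(round(implied_probability(20), 3))  # ~0.048
```

That is, taking the short side of a 12-1 bet signals assigning well under a 1-in-13 chance to the robot-catastrophe scenario arriving on that timeline.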
I'd also like to see a time series of the rates at which jobs were displaced by automation in the past. If this rate were unusually high and rising, that would be an omen worth noticing. But if it's too hard to say which past jobs were lost to automation, what hope could we have of predicting which future jobs will be so lost?
Finally, trends in the rates of progress in robotic research are worthy of study. When I meet experienced artificial intelligence researchers informally, I often ask how much progress they have seen in their specific A.I. subfield in the last 20 years. A typical answer is about 5 to 10 percent of the progress required to achieve human-level A.I., though some say less than 1 percent and a few say that human abilities have already been exceeded. They also typically say they've seen no noticeable acceleration over this period.
If a more sustained study bears out those informal answers—and if that rate of progress persists—it would take two to four centuries for many A.I. subfields to (on average) reach human-level abilities. Since there would be variation across subfields, and since achieving a human-level A.I. probably requires human-level abilities in most subfields, a broadly capable human-level A.I. should take even longer than two to four centuries to emerge. Furthermore, computer hardware gains have been slowing lately, and we have good reason to think this will cause software gains to slow as well.
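The back-of-envelope extrapolation behind the "two to four centuries" figure can be made explicit (the function name is mine; the 5-10 percent and 20-year figures are from the informal survey described above):

```python
def years_remaining(fraction_done, years_elapsed=20.0):
    # Linear extrapolation: if `fraction_done` of the gap to human-level
    # ability closed in `years_elapsed` years, how many more years at
    # that same rate until the gap is fully closed?
    rate_per_year = fraction_done / years_elapsed
    return (1.0 - fraction_done) / rate_per_year

# 10% progress in 20 years implies ~180 more years; 5% implies ~380,
# i.e., roughly two to four centuries in total.
print(round(years_remaining(0.10)), round(years_remaining(0.05)))
```

This assumes the rate stays constant, which is exactly the premise the survey answers about "no noticeable acceleration" are meant to support.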
Perhaps my small informal survey is misleading for some reason; bigger, more systematic surveys would be useful, as well as more thoughtful analyses of them. We do expect automation to take most jobs eventually, so we should work to better track the situation. But for now, Ford's reading of the omens seems to me little better than fortunetelling with entrails or tarot cards.
This article originally appeared in print under the headline "How to Survive a Robot Uprising."
“But if it’s too hard to say which past jobs were lost to automation, what hope could we have of predicting which future jobs will be so lost?”
I not only have “hope” of doing so, I’ve already done it:
Of the top 15 most common jobs in the U.S. in 2012, I’ve predicted that employment in those 15 job categories will decline by about 50% (adjusted for population growth) by 2044:
http://markbahner.typepad.com/…..art-2.html
Particularly vulnerable are:
Retail salespeople, cashiers, tractor trailer drivers, material movers (e.g. loading dock workers).
Mark Bahner|3.4.15 @ 1:59PM|#
“I not only have “hope” of doing so, I’ve already done it:”
I see you’re making all sorts of predictions.
Now I’d like to see some results. Please post some predictions that have turned out to be true and which:
1) Are not trivial
2) Had origination and ‘prove’ dates
3) Had measurable values.
That’s from a week ago.
Damn.
So did his predictions come true??
“Now I’d like to see some results. Please post some predictions that have turned out to be true and which:
1) Are not trivial
2) Had origination and ‘prove’ dates
3) Had measurable values.”
I predicted the 1969 Mets over the Orioles.
But I’m sure you’ve heard of “past performance is no indication of future results.”
So…you think my predictions are wrong about what jobs will decline and by how much? Well, make your own predictions! In fact, make them on my blog, and we’ll have a record of who knows what they’re talking about, and who doesn’t.
Of course I meant *accurately* predicting; it is trivial to predict anything if accuracy isn’t an issue.
I’d be happy to bet my predictions are more accurate than yours.
“…it would take two to four centuries for many A.I. subfields to (on average) reach human-level abilities. Since there would be variation across subfields, and since achieving a human-level A.I. probably requires human-level abilities in most subfields, a broadly capable human-level A.I. should take even longer than two to four centuries to emerge.”
This is wildly out of sync with the predictions of virtually all experts on human-level A.I. My prediction is that this assessment will be viewed in the year 2050 as if someone had predicted in 1870 that powered flight was “two to four centuries away.”
This website contains the results of recent surveys on the subject of the timing for human-level AI:
http://lesswrong.com/lw/l0o/su…..asting_ai/
A recent set of surveys of AI researchers produced the following median dates:
- for human-level AI with 10% probability: 2022
- for human-level AI with 50% probability: 2040
- for human-level AI with 90% probability: 2075
Those assessments look good to me. (Note that “human-level AI” is a bit of a tricky thing to assess.)
You think it significant that people who want funding for AI research think it will pay off Real Soon Now? What are climate scientists predicting about temperature changes over the next century, and have cold fusion types become pessimistic all of a sudden?
Also from your link:
“When Will AI Be Created?
1. Predicting when human-level AI will arrive is hard.”
Ray Kurzweil has already bet (on Long Bets) that a computer will pass the Turing Test by 2029. Presumably, Robin Hanson says the likely date is more like somewhere between 2229 and 2429.
I don’t think it’s at all a hard call to predict which one is likely to be closer to the actual date of a computer passing the Turing Test. Ray Kurzweil will be much, much closer to the actual date.
It’s also useful to note that human level AI will be one of the last things human beings themselves invent. After that, the speed of evolution should be thousands of times faster than it is today.
I more trust expert descriptions of what they have seen in the past in their subfield, relative to their forecasts for future progress in fields where they are not expert. See more: http://www.overcomingbias.com/…..imate.html
“I more trust expert descriptions of what they have seen in the past in their subfield, relative to their forecasts for future progress in fields where they are not expert.”
These are not “forecasts for future progress in fields where they are not expert.”
The website to which I linked had forecasts by the “top 100 AI researchers” (in terms of citations by others). Their predictions essentially match the table I presented above (i.e. 50% probability by 2040, 90% probability by 2075).
What you trust are your own forecasts over the “crowd wisdom” forecasts of literally more than 100 top experts in the AI field. That ought to cause you to re-evaluate your own forecasts. I think it will be less than 20 years before the experts will be shown to be generally right, and your forecasts way, way, (waaaayyy) off.
Totally compatible with all other BuckyBall sets! Each set comes with a little carrying case too. Each buckyball is 5mm in diameter. Cube: 1 ?” high x 1 ?” wide x 1 ?” diameter. For adults only. These are so super strong, they should be kept away from children.
What about my robots? Should I keep them away from my robots?
What about my orphans? For Christ sake, if they can mine diamonds for their gruel, I’m sure they are smart enough to not choke on Buckyballs. I’m sure any that do will be well within the margin of the calculated acceptable losses.
Have you seen the surgery that they have to do if you swallow 2 of those a few hours apart? It’s pretty nasty.
Surgery? On orphans? Please!
They explode if you don’t. Seriously. Shit explodes everywhere.
I don’t think I could eat 2 orphans, after all that mining they are gonna be pretty tough no matter how slowly you cook ’em.
Pressure cooker. 15 PSI, 6 hours.
Yes. They’ll interfere with the robots inhibition unit and may cause it to start singing folk songs.
The world has never lacked for Malthusians, no matter that not a single one has ever made a valid prediction.
The entire animal world has always been Malthusian, as have almost all humans before three hundred years ago.
So every other time in human history when technology advanced rapidly it led to the average person being significantly better off than before. But this time it’s different, right? Just like it always is. Fucking Luddites.
It’s not tech, but if lefties have their way, Bahner’s predictions of massive unemployment might come true:
“Minimum wage hike hurts Oakland Chinatown”
http://www.sfchronicle.com/bay…..ate-result
Most of it is behind a pay-wall, but the paper version claims four restaurants and six groceries have bit the dust.
Market Failure!
“It’s not tech, but if lefties have their way, Bahner’s predictions of massive unemployment might come true:…”
Where did I predict “massive unemployment”? I predicted that U.S. employment in what were the top 15 occupations in 2012 will decrease by approximately 50% (adjusted for population) by 2044.
That prediction contains nothing about unemployment…let alone “massive unemployment” (whatever that means).
http://nypost.com/2015/03/14/o…..l-scandal/
Lol
Somebody posted that a few articles down.
I’m not sure how I feel about it. It makes her look almost…. sympathetic. Like she’s a victim of the big bad government.
“It makes her look almost…. sympathetic.”
Your eyes; there’s something wrong with them.
“they’re spreading rumors that I’ve been with women, that Hillary promoted people at the State Department who’d done favors for our foundation, that John Kerry had to clean up diplomatic messes Hillary left behind”
‘rumors’.
(let that marinate for a moment)
i.e. ‘all true’
I believe that she left behind a diplomatic mess. I do not however, believe that John Kerry cleaned anything up.
He’s swirling it around trying to look busy while he waits for someone who knows what to do to show up.
*checks to see if sexbots have been created*
*goes back to not giving a shit*
TEH RHOOMBA TOOKS YER JARBS!!
…
lou dobbs warned everyone, ‘it sounded like a spanish name to him’, but nooooooooo, did people listen?
But what of the carriage makers Mr Ford (the other one)?
What these Luddites never take into consideration is that new technology also creates a shit-ton of opportunity in support of it.
ALSO, when robots take over, the price of goods and services will plummet, making shit we use EXTREMELY affordable, requiring less work to obtain basic necessities. Street mimes will be able to afford the Cadillac plan. These dire predictions are based on the work paradigm remaining the same. It won’t.
An excellent point. Used to be only the super rich had indoor chamber pots. Now only diehard hermits don’t have hot and cold running water.
112 years ago, nobody could fly. Now it’s so damned cheap that only those same hermits can’t afford it, and almost everybody else can zoom around the world in a matter of days (but most have no need to).
Personal transportation used to be relegated to the well-off or those who used horses in their work. Now every adult can afford some kind of car.
Telegrams used to be hideously expensive, and only brokers and other filthy rich had their own home telegraph. Now any schmoe who wants to can look up stock prices on a pocket phone.
Their are many peoples who’s jobs could be automated with a simple program. Lots of office workers who move and collect data. One day a manger with a programming background will see what people are doing and put them all out of a job.
“Alch|2015/03/15 12:55:10|#5155913
Their are many peoples who’s jobs could be automated with a simple program”
For instance = if we install a thing called “Spell Check” onto every computer? We can stop wasting money teaching youth basic familiarity with the English language. The Grammar Nazis will be undone! Liberation for all! VIVA LA… RHOOMBA!!
Think of the dictionary authors Mr Gates.
Was telling my wife the other day, that I was going to throw my dictionary out. But she reminded me that I may need to write letters again when all powered machines fail as a result of the great solar storm of 2027.
Why? Nobody else will know how to spell either.
E-T-H-E-R.
“I may need to write letters again when all powered machines fail as a result of the great solar storm of 2027”
Write on what? Clearly you’ll need to burn all your paper (after the wood furniture is gone) just to keep warm.
I hope you reached your smug limit for the day. Sorry I went to public school.
“I contain multitudes [of smug]”
Anonbot Took Ur Jobz
That manager will TRY to do that, but will get bogged down by a series of stakeholder meetings and requirements-gathering sessions, and by the time a project plan is devised the company will be sold and the manager will be RIF’d, and the next manager will start the process over.
This process will continue for the next century.
Just for the record, I’m an automation engineer, and people’s specific jobs disappear at a pretty steady rate in the US industrial sector. Generally speaking it’s slow enough that the person doesn’t get fired, but moved over to another job that a plant expansion created. It’s why for decades the US industrial output has increased, but overall manufacturing jobs are stagnant.
However, over the long term the percentage of jobs relative to the overall population in manufacturing has declined. From my point of view, I see both sides of this argument. I agree that historically this hasn’t been an issue. On the other hand I notice that technological cycle times have been steadily decreasing, resulting in less time for people to adapt to the change.
If autonomous driving were to mature rapidly, there would be significant economic disruption. It would almost certainly be temporary (5 to 10 years), but it’s ridiculous to ignore the possibility that it would be significant.
Will robots recycle posts in our post-prosperity future?
Will they have the decency to remove old comments first?
Naah.
They’ll leave ’em, and I’ll trip over ’em.
I’m convinced there is already a socialist bot that can out prog amsoc
Speaking of tech, Anti-Gun Activists Want Obama to Supply Law Enforcers with “Smart” Guns
And the Two-Minutes Hate that breaks out in the comments:
The modern Englishman, ladies and gentlemen. A slave down to his very bones.
Who needs edible food?!
One I left out because WORD LIMIT:
“HHeLiBe 5h ago
Unless this is mandated the gun nutters will view it as impinging on their rights.”
?
So if it’s not “mandated”( therefore voluntary) it will impinge people’s rights. But if people are forced then it doesn’t impinge their rights?
First they’re going to need to provide these people free diapers.
Srebrenica
I’d love to see some gun-control outfit push this on cops – you must carry the UltraSafe MaybeFire WidgetGun to increase public safety.
The rank and file cops would spit all over these idiots, but the chiefs who are in bed with them would be in for an ugly divorce.
I’m late to the party, but all of the “mandate” proposals for “smart guns” (translation: guns that don’t work) specifically exempt law enforcement.
Just like magazine capacity limits.
The problem with projections like Bahner’s is that they look at individual job functions in isolation.
You may be able to devise a way to automate a particular activity, but unless that activity already occurs in a highly regimented and stratified environment, implementation will be incredibly difficult.
To illustrate this, consider the difference between implementing bar-code scanners in supermarkets and electronic health records in emergency rooms.
Supermarket workflow was already incredibly regimented when the first bar-code scanner was introduced. Groceries were placed on a conveyor belt by customers, and cashiers picked items up one at a time, read the price, entered the price, and put the item back down on to another conveyor belt. This was incredibly easy to automate because all you had to do was create a machine to replace the function(s) “read and enter price” and then insert that machine into a regimented workflow waiting to receive the machine.
(continued)
But electronic health records in emergency rooms are an entirely different matter. Although there are common hospital practices, no two hospitals have workflows that are exactly the same. So although this problem looked as easy to automate as grocery barcode scanners, in practice it has been incredibly difficult to implement – so much so that even after billions of dollars in government subsidy money has been spent, EHR has still not been completely implemented. And – perhaps more importantly – implementation has been so difficult that the number of jobs related to this activity has probably risen, and not fallen. Lots of IT people eating salaries trying to implement solutions in this area, and that will probably be the case for what amounts to forever.
My experience in IT has been that, society-wide, we simply don’t have enough competent management – and enough management continuity – to accomplish widespread implementation of highly complex automation. The number of problems that will be as difficult to accomplish as EHR vastly exceeds the number of problems that will be as easy as barcode scanners. And the process of even attempting that automation will employ vast numbers of people.
(continued)
We will almost certainly have an extended period of time where we try, over and over, to automate certain areas of the economy – but never quite get there. And the incremental improvements we make will cause some job losses, but those job losses will be offset by a vast number of jobs connected to the Sisyphean task of achieving and maintaining those incremental improvements.
“but those job losses will be offset by a vast number of jobs connected to the Sisyphean task of achieving and maintaining those incremental improvements.”
I get your point, but I question whether people will actually be employed in jobs implementing automation. Jobs mean money, and the work you speak of can be handed off to consumers/users, who are expected to do it for free. Think of self-serve gas stations, the automated bank machine, or the check-in/passport control at the airport. Automation occurs here because the extra workload is foisted on the user. The unpaid user.
“betting $1,200 at 12-1 odds that the Bureau of Labor Statistics’ measurement of the labor fraction of U.S. income won’t go below 40 percent by 2025.”
Not that I expect you’d get much interest, but for these purposes, a better bet may be on the share of income going to some lower quantiles of labor earners. (though maybe you’d need 1 data source)
Sure, if anyone was interested.
I predict Etsy-type ventures will eventually be the Rodeo Drive of the future. People will pay a fortune (more than we do now) for something humanly made. What comes around, goes around, most times. Too bad none of us will ever know for sure.
As a person who operates a nondestructive inspection automated robotic system, I’m not too worried about future automation, because, are they going to make robot repairmen? These high-tech machines don’t always work like they are supposed to. And since I’m inspecting aircraft parts, does anyone want a robot to take pass or fail responsibility for the integrity of the part? Probably not.
Ironically, the 20 yr old beat to hell system I use is still better than the newer machines we got a couple of years ago.
“are they going to make robot repairmen? ”
Quaint idea. Never heard of disposable robots?
Mr. Hanson insightfully, but probably unwittingly, offers the key that unlocks the solution to this capital-labor substitution inequality problem. He plans to earn money by assuming the risks associated with the state of the future by betting on prediction markets. Mr. Hanson bets robots won’t take over the future, and if true he will make money that will increase his wealth, no labor expended.
What matters is not that robots replace human labor – that would be a good thing, no? – but WHO owns the robots. If one invests in robot technology by taking equity risks on an uncertain future, one can lay claim to the productivity of those robots.
The world is full of risks of loss from change, whether that risk be climate change or technological change. But to fully participate in a market economy one must manage those risks successfully and get PAID for them. This is what our obsolete capital-labor model misses in thinking that we are paid solely for labor productivity. Profits are a return to risk capital and that’s how we should seek to distribute the gains to success in capitalist enterprise.
Me? I’m investing in biotechnology and robots, just like I invested in computers with human and financial capital. Some of these investments go belly up, but that’s the reality of uncertainty.
“if that rate of progress persists” I think that’s a fairly significant premise. Progress rates can be highly susceptible to sudden changes, for a number of reasons. But your analysis is thoughtful and interesting.
Plenty of good things come from war. The silly whining about pro-war, anti-war messages needs to run its course. War in and of itself has no moral agency, the belief that it does has to be one of the most rampant animistic beliefs of the last 50 years.
Wow, this is the website for Crackers who can do complete (if cliched) sentences! I am so impressed.
Particularly by the cite of the NY Post, and by the argument that went on and on but had very little to do with the book.