Ebook Superintelligence Paths Dangers Strategies Download


It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful, possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

As a software developer, I've cared very little for artificial intelligence (AI) in the past. My programs, which I develop professionally, have nothing to do with the subject; they're as dumb as can be and only follow strict orders, that is, rather simple algorithms. Privately, I wrote a few AI test programs with more or less success and read a few articles in blogs or magazines with more or less interest. By and large, I considered AI as not being relevant for me.

In March 2016, AlphaGo was introduced, the first Go program capable of defeating a champion in this game. Shortly after that, in December 2017, AlphaZero entered the stage. Roughly speaking, this machine is capable of teaching itself games after being told the rules. Within a day, AlphaZero developed a superhuman level of play for Go, chess and shogi, all by itself, if you can believe the developers. The algorithm used in this machine is very abstract and can probably be used for all games of this kind. The amazing thing for me was how fast AI development progresses.

This book is not all about AI, though; it's about superintelligence (SI). An SI can be thought of as some entity which is far superior to human intelligence in all or almost all cognitive abilities. To paraphrase Lincoln: you can outsmart some of the people all of the time, and you can outsmart all of the people some of the time, but you can't outsmart all of the people all of the time, unless you are a superintelligence. The subtitle of the English edition, "paths, dangers, strategies", has been chosen wisely: what steps can be taken to build an SI, what are the dangers of introducing an SI, and how can one ensure that these dangers and risks are eliminated, or at least scaled down to an acceptable level?

An SI does not necessarily have to exist in a computer. The author is also a co-founder of the World Transhumanist Association, and therefore transhumanist ideas are included in the book, albeit in a minor role. An SI could theoretically be built by using genetic selection of embryos, i.e. breeding. Genetic research would probably soon be ready to provide the appropriate technologies; for me a scary thought, something which touches my personal taboos. Not completely outlandish, but still with a big ethical question mark for me, is Whole Brain Emulation (WBE). Here the brain of a human being, more precisely the state of the brain at a given time, is analyzed and transferred to a corresponding data structure in the memory of a powerful computer, where the brain/consciousness of the individual then continues to exist, possibly within a suitable virtual reality. There are already quite a few films and books that deal with this scenario; for a positive example, see one episode of the Black Mirror series. With WBE, you would have an artificial entity with the cognitive performance of a human being. The vastly superior processing speed of the digital versus the biological circuits would let this entity become superintelligent: consider 100,000 copies of a 1,000x faster WBE, let them run for six months, and you'll get 50,000 millennia worth of thinking.

However, the main focus in the discussion about SI in this book is the further development of AI to become Super AI (SAI). This is not a technical book, though. It contains no computer code whatsoever, and the math, appearing twice in some info boxes, is only marginal and not at all necessary for understanding.

One should not imagine an SI as a particularly intelligent person. It might be more appropriate to equate the ratio of SI to human intelligence with that of human intelligence to the cognitive performance of a mouse. An SI will indeed be very, very smart, and unfortunately also very, very unstable. By that I mean that an SI will at any time be busy changing and improving itself. The SI you speak with today will be a million or more times smarter tomorrow. In this context the book speaks of an "intelligence explosion". Nobody knows yet when this will start and how fast it will go. It could be next year, or in ten, fifty or one hundred years. Or perhaps never, although this is highly unlikely. Various scenarios are discussed in the book. Also, it is not clear if there will be only one SI, a so-called singleton, or several competing or collaborating SIs, with a singleton seeming to be more likely.

I think it's fair to say that humanity as a whole has the wish to continue to exist; at least, the vast majority of people do not consider the extinction of humanity desirable. With that in mind, it would make sense to instruct an SI to follow that same goal. Now, suppose I forgot to specify the exact state in which we want to exist. In this case the SI might choose to put all humans into a coma (less energy consumption). The problem is solved from the SI's point of view: its goal has been reached. But obviously this is not what we meant. We have to re-program the SI and tweak its goal a bit. Therefore, it would be mandatory to always be able to control the SI. It's possible an SI will not act the way we intended; it will act, however, the way we programmed it. A case of an unfriendly SI is actually very likely. The book mentions and describes "perverse instantiation", "infrastructure profusion" and "mind crime" as possible effects. The so-called control problem remains unsolved as of now, and it appears equivalent to that of a mouse controlling a human being. Without a solution, the introduction of an SI becomes a gamble: with a very high probability, a savage SI will wipe out humanity.

The final goal of an SI should be formulated pro-human, if at all possible. At the very least, the elimination of humankind should not be prioritized at any time. You should give the machine some kind of morality. But how does one do that? How can you formulate moral ideas in a computer language? And what happens if our morals change over time, which has happened before, and the machine still decides on a then-outdated moral ground? In my opinion, there will be insurmountable difficulties at this point. Nevertheless, there are at least some theoretical approaches, explained by Bostrom, who is primarily a philosopher. It's quite impressive to read these chapters, albeit also a bit dry. In general, the chapters dealing with philosophical questions and how they translate to the SI world were the most engrossing ones for me. The answers to this kind of question are also subject to some urgency: advances in technology generally move faster than wisdom, not only in this field, and the sponsors of the projects expect some return on investment. Bostrom speaks of "philosophy with a deadline", a fitting but also disturbing image.

Another topic is an SI that is neither malignant nor fitted with false goals (something like this is also possible) but, on the contrary, actually helps humanity. Quote: "The point of superintelligence is not to pander to human preconceptions, but to make mincemeat out of our ignorance and folly." Certainly this is a noble goal. However, how will people, and I'm thinking of those who are currently living, react when their follies are disproved? It's hard to say, but I guess they will not be amused. One should not trust people too much in this respect; see below for my own anger.

Except for the sections on improving human intelligence through biological interference and breeding (read: eugenics), I found everything in this book fascinating, thought-provoking and highly disturbing. The book has, in a way, changed my world view rather drastically, which is rare. My folly about AI, and especially Super AI, has changed fundamentally. In a way, I've gone through four of the five stages of grief/loss. Before the book, I flatly denied that a Super AI would ever come to fruition. When I read the convincing arguments that a Super AI is not only possible but indeed very likely, my denial changed into anger: in spite of the known problems and the existential risk of such a technology, how can one even think of following this slippery slope? (This question is also dealt with in the book.) My anger then turned into a depression (not a clinical one) towards the end. Still in this condition, I'm now awaiting acceptance, which in my case will likely be fatalism.

A book that shook me profoundly, that I actually wished I had not read, but that I still recommend highly. I guess I need a superintelligence to make sense of that.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
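The WBE speed-up claim in the review above is plain arithmetic and easy to check. A minimal back-of-the-envelope sketch, using the figures quoted in the review (the function name is mine, not from the book):

```python
# Back-of-the-envelope check of the WBE speed-up figures quoted above:
# 100,000 copies of an emulation running 1,000x faster than a biological
# brain, left to run for six calendar months (0.5 years).

def subjective_years(copies: int, speedup: float, wall_clock_years: float) -> float:
    """Aggregate subjective thinking time across all parallel copies."""
    return copies * speedup * wall_clock_years

total = subjective_years(copies=100_000, speedup=1_000, wall_clock_years=0.5)
print(f"{total:,.0f} subjective years = {total / 1_000:,.0f} millennia")
# 50,000,000 subjective years = 50,000 millennia
```

Each copy alone experiences 500 years in those six months; the parallelism multiplies that into tens of millions of aggregate thinking years.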


Superintelligence: Paths, Dangers, Strategies

The fate of humankind may thus depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This book: if, else if, else if, else if, else if. You can get most of the ideas in this book from the WaitButWhy article about AI. This book assumes that an intelligence explosion is possible and that it is possible for us to make a computer whose intelligence will explode, then talks about ways to deal with it. A lot of this book seems like pointless navel-gazing, but I think some of it is worth reading.


Superintelligence asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position.

There has been a spate of outbursts from physicists who should know better, including Stephen Hawking, saying "philosophy is dead, all we need now is physics", or words to that effect. I challenge any of them to read this book and still say that philosophy is pointless.

It's worth pointing out immediately that this isn't really a popular science book. I'd say the first handful of chapters are for everyone, but after that the bulk of the book would probably be best for undergraduate philosophy students or AI students, reading more like a textbook than anything else, particularly in its dogged detail. But if you are interested in philosophy and/or artificial intelligence, don't let that put you off.

What Nick Bostrom does is to look at the implications of developing artificial intelligence that goes beyond human abilities in the general sense. Of course, we already have a sort of AI that goes beyond our abilities in the narrow sense of, say, arithmetic or playing chess. In the first couple of chapters he examines how this might be possible, and points out that the timescale is very vague. Ever since electronic computers were invented, pundits have been putting the development of effective AI around 20 years in the future, and it's still the case. Even so, it seems entirely feasible that we will have a more-than-human AI, a superintelligent AI, by the end of the century. But the "how" aspect is only a minor part of this book.

The real subject here is how we would deal with such a cleverer-than-us AI. What would we ask it to do? How would we motivate it? How would we control it? And, bearing in mind it is more intelligent than us, how would we prevent it taking over the world or subverting the tasks we give it to its own ends? It is a truly fascinating concept, explored in great depth here. This is genuine, practical philosophy. The development of super AIs may well happen, and if we don't think through the implications and how we would deal with them, we could well be stuffed as a species.

I think it's a shame that Bostrom doesn't make use of science fiction to give examples of how people have already thought about these issues. He gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they'd go wrong), but that's about it. Yet there has been a lot more thought (and, dare I say it, a lot more readability than you typically get in a textbook) put into the issues in science fiction than is being allowed for, and it would have been worthy of a chapter in its own right.

I also think a couple of the fundamentals aren't covered well enough, but are pretty much assumed. One is that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I'm not sure there is enough thought put into the basics of ways you can pull the plug manually, if necessary by shutting down the power station that provides the AI with electricity.

The other dubious assertion was originally made by I. J. Good, who worked with Alan Turing, and seems to be taken as true without analysis. This is the suggestion that an ultra-intelligent machine would inevitably be able to design a better AI than humans, so once we build one, it will rapidly improve on itself, producing an "intelligence explosion". The trouble with this argument, I suspect, is that if you got hold of the million most intelligent people on Earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn't mean it can do this specific task well; this is an assumption.

However, this doesn't set aside what a magnificent conception the book is. I don't think it will appeal to many general readers, but I do think it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs, and by physicists who think there is no point to philosophy.
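Good's recursive-improvement argument, which the reviewer questions, rests on one assumption: that each generation of AI can design a successor that is better than itself by some factor. A toy sketch (all numbers here are illustrative assumptions, not figures from the book) shows how sensitive the conclusion is to that single factor:

```python
# Caricature of I. J. Good's "intelligence explosion" argument.
# Assumption (illustrative, not from the book): each generation designs a
# successor whose capability is the parent's, scaled by a fixed factor.

def run_generations(start: float, improvement_factor: float, generations: int) -> float:
    """Capability after repeatedly letting each AI design its successor."""
    capability = start
    for _ in range(generations):
        capability *= improvement_factor
    return capability

# If each successor is even 10% better, growth compounds explosively:
explosive = run_generations(start=1.0, improvement_factor=1.1, generations=100)

# If each successor is slightly worse at the design task, the process fizzles:
fizzle = run_generations(start=1.0, improvement_factor=0.9, generations=100)

print(f"explosive run: {explosive:.1f}, fizzled run: {fizzle:.6f}")
```

The "explosion" is just compound growth: whether it happens at all depends entirely on whether the improvement factor stays above 1, which is exactly the reviewer's point that Good's premise is an assumption rather than an established fact.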

10 thoughts on “Superintelligence: Paths, Dangers, Strategies”

  1. says:

    Superintelligence was published in 2014, and it's already had time to become a cult classic. So, with apologies for being late getting t

  2. says:

    There has been a spate of outbursts from physicists who should know better including Stephen Hawking saying ‘philosophy is dead – all we

  3. says:

    Preamble: I confess to Roko's Basilisk that I didn't believe superintelligence was much of a threat, and this book didn't really do anything to challenge that prior. Mea culpa, mea culpa, mea local maxima culpa. I. Overall View: I'm a software engineer with some basic experience in machine learning, and though the results of machine learning have been becoming more impressive and general, I've never really seen where people a

  4. says:

    Imagine a Danger. You may say I'm a Dreamer. Bostrom is here to imagine a world for us, and he has a batshit-crazy imagination, have to give him that. The world he imagines is a post-AI world, or at least a very-near-to-AI world, or a nascent-AI world. Don't expect to know how we will get there, only what to do if we get t

  5. says:

    In recent times, prominent figures such as Stephen Hawking, Bill Gates and Elon Musk have expressed serious concerns about the development of strong artificial intelligence technology, arguing that the dawn of superintelligence might well bring about the end of mankind. Others, like Ray Kurzweil, who admittedly has g

  6. says:

    As a software developer, I've cared very little for artificial intelligence (AI) in the past. My programs, which I develop profe

  7. says:

    This book: if, else if, else if, else if, else if. You can get most of the ideas in this book in the WaitButWhy article about AI

  8. says:

    Hypothetical enough to become insanely dumb and boring. Superintelligence, hyperintelligence, hypersuperintelligence. Basically it

  9. says:

    If you're into stuff like this, you can read the full review at Count of Self. 0. Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Box 8, Anthropic capture: "The AI might assign a substantial probability to its simulation hypothesis, the hypothesis that it is living in a computer simulation." In Superintelligence, P

  10. says:

    I'm very pleased to have read this book. It states concisely the general field of AI research's BIG ISSUES. The paths to making AIs are only a part of the book, and not a particularly important one at this point. More interestingly, it states that we need to be focused on the dangers of superintelligence. Fair enough. If I were an ant separated from