List price: 30,500 KRW
Sale price: 25,930 KRW (15% off)
Reward points: 780P (3% earned)
Publisher's Review
"This is the most important book I have read in quite some time. It lucidly explains how the coming age of artificial super-intelligence threatens human control. Crucially, it also introduces a novel solution and a reason for hope." -Daniel Kahneman, winner of the Nobel Prize and author of Thinking, Fast and Slow
"A must-read: this intellectual tour-de-force by one of AI's true pioneers not only explains the risks of ever more powerful artificial intelligence in a captivating and persuasive way, but also proposes a concrete and promising solution." -Max Tegmark, author of Life 3.0
"A thought-provoking and highly readable account of the past, present and future of AI . . . Russell is grounded in the realities of the technology, including its many limitations, and isn't one to jump at the overheated language of sci-fi . . . If you are looking for a serious overview to the subject that doesn't talk down to its non-technical readers, this is a good place to start . . . [Russell] deploys a bracing intellectual rigour . . . But a laconic style and dry humour keep his book accessible to the lay reader." -Financial Times
"A carefully written explanation of the concepts underlying AI as well as the history of their development. If you want to understand how fast AI is developing and why the technology is so dangerous, Human Compatible is your guide." -TechCrunch
"Sound[s] an important alarm bell . . . Human Compatible marks a major stride in AI studies, not least in its emphasis on ethics. At the book's heart, Russell incisively discusses the misuses of AI." -Nature
"An AI expert's chilling warning . . . Fascinating, and significant . . . Russell is not warning of the dangers of conscious machines, just that superintelligent ones might be misused or might misuse themselves." -The Times (UK)
Table of Contents
Chapter Page
Preface xi
Chapter 1 If We Succeed 1
Chapter 2 Intelligence in Humans and Machines 13
Chapter 3 How Might AI Progress in the Future? 62
Chapter 4 Misuses of AI 103
Chapter 5 Overly Intelligent AI 132
Chapter 6 The Not-So-Great AI Debate 145
Chapter 7 AI: A Different Approach 171
Chapter 8 Provably Beneficial AI 184
Chapter 9 Complications: Us 211
Chapter 10 Problem Solved? 246
Appendix A Searching for Solutions 257
Appendix B Knowledge and Logic 267
Appendix C Uncertainty and Probability 273
Appendix D Learning from Experience 285
Acknowledgments 297
Notes 299
Image Credits 324
Index 325
Book Description
"The most important book on AI this year." --The Guardian
"Mr. Russell's exciting book goes deep, while sparkling with dry witticisms." --The Wall Street Journal
"The most important book I have read in quite some time" (Daniel Kahneman); "A must-read" (Max Tegmark); "The book we've all been waiting for" (Sam Harris)
A leading artificial intelligence researcher lays out a new approach to AI that will enable us to coexist successfully with increasingly intelligent machines
In the popular imagination, superhuman artificial intelligence is an approaching tidal wave that threatens not just jobs and human relationships, but civilization itself. Conflict between humans and machines is seen as inevitable and its outcome all too predictable.
In this groundbreaking book, distinguished AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to vastly accelerated scientific research, and outlines the AI breakthroughs that still have to happen before we reach superhuman AI. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage.
If the predicted breakthroughs occur and superhuman AI emerges, we will have created entities far more powerful than ourselves. How can we ensure they never, ever, have power over us? Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. Such machines would be humble, altruistic, and committed to pursue our objectives, not theirs. This new foundation would allow us to create machines that are provably deferential and provably beneficial.
About the Author
He is currently a professor in the Computer Science Division at UC Berkeley, director of the Center for Intelligent Systems, and holder of the Smith-Zadeh Chair in Engineering. He has published more than 100 papers on a wide range of topics in artificial intelligence, and his other books include The Use of Knowledge in Analogy and Induction and Do the Right Thing: Studies in Limited Rationality.
Seller Information
Company name: Kyobo Book Centre Co., Ltd.
Representative: 안병현
Business registration number: 102-81-11670
Contact: 1544-1900
Email: callcenter@kyobobook.co.kr
Mail-order business report number: 01-0653
Business address: 1 Jongno, Jongno-gu, Seoul (Jongno 1-ga, Kyobo Building)
Exchange/Refund
Return/exchange method: Apply at 'My Page > Cancel/Return/Exchange/Refund', through the 1:1 inquiry board, or via customer service (1577-2555).
Return/exchange period: Change-of-mind returns are accepted only within 6 business days after shipment is completed.
Return/exchange cost: For returns/exchanges due to change of mind or purchase error, the customer pays the return shipping.
Cases where return/exchange is not possible:
- The goods were lost or damaged for reasons attributable to the consumer
- The value of the goods has significantly decreased due to the consumer's use or opening of the packaging
- The packaging of goods that can be copied has been damaged
- The value of the goods has decreased over time to the point that resale is difficult
- The case falls under the restrictions on withdrawal of offer set out in the Act on Consumer Protection in Electronic Commerce
Out of stock: Orders may be out of stock or delayed depending on supplier (publisher) inventory.
Consumer compensation:
- Exchanges, after-sales service, refunds, quality assurance, and compensation for defective goods are handled in accordance with the Consumer Dispute Resolution Standards (Fair Trade Commission notice)
- Conditions and procedures for refunds and for compensation due to delayed refunds follow the Act on Consumer Protection in Electronic Commerce
For members' transaction safety, Interpark Commerce Co., Ltd. applies the purchase-protection service provided by KG Inicis Co., Ltd. to all transactions made through Interpark Commerce, regardless of purchase amount or payment method.
Shipping Information
Kyobo Book Centre products are shipped by courier and arrive within 1-2 days after shipment is completed.
If products with different shipping lead times are ordered together, delivery is based on the product with the longest lead time.
Certain institutions, such as military bases and prisons, can only be served by Korea Post courier.
Shipping fees follow each vendor's shipping-fee policy.