List Price | 110,000 KRW
---|---
Price | 110,000 KRW
Reward Points | 3,300P (3% of price)
From the Text
From the Back Cover
This textbook introduces linear algebra and optimization in the context of machine learning. Examples and exercises are provided throughout the book, together with access to a solutions manual. The textbook targets graduate-level students and professors in computer science, mathematics, and data science; advanced undergraduate students can also use it. The chapters are organized as follows:
1. Linear algebra and its applications: These chapters focus on the basics of linear algebra together with their common applications to singular value decomposition, matrix factorization, similarity matrices (kernel methods), and graph analysis. Numerous machine learning applications are used as examples, such as spectral clustering, kernel-based classification, and outlier detection. The tight integration of linear algebra methods with examples from machine learning differentiates this book from generic volumes on linear algebra. The focus is clearly on the most relevant aspects of linear algebra for machine learning and on teaching readers how to apply these concepts. (A small SVD sketch appears after this list.)
2. Optimization and its applications: Much of machine learning is posed as an optimization problem in which we try to maximize the accuracy of regression and classification models. The "parent problem" of optimization-centric machine learning is least-squares regression. Interestingly, this problem arises in both linear algebra and optimization and is one of the key problems connecting the two fields. Least-squares regression is also the starting point for support vector machines, logistic regression, and recommender systems. Furthermore, the methods for dimensionality reduction and matrix factorization also require the development of optimization methods. A general view of optimization in computational graphs is discussed, together with its applications to backpropagation in neural networks. (A least-squares sketch follows the list.)
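To give a flavor of the linear algebra material, here is a minimal NumPy sketch of the singular value decomposition and a low-rank approximation, one of the factorization tools the first part of the book covers. The matrix values are illustrative only and are not taken from the book.

```python
import numpy as np

# Toy 4x3 data matrix (rows = points, columns = features);
# the values are made up for illustration.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 2.0, 0.0]])

# Thin SVD: A = U @ diag(s) @ Vt, with singular values s in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Best rank-1 approximation (Eckart-Young): keep only the top singular value.
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])

# In the spectral norm, the rank-1 approximation error equals the
# second singular value.
print("singular values:", s)
print("rank-1 error:", np.linalg.norm(A - A1, 2), "~= s[1] =", s[1])
```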
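Similarly, the "parent problem" of least-squares regression can be viewed from both sides the back cover describes: as an optimization problem (minimize ||Xw - y||^2) and as a linear algebra problem (the normal equations, solved stably via the SVD). A minimal sketch, with made-up data not taken from the book:

```python
import numpy as np

# Synthetic regression data: 5 observations, an intercept column plus
# 2 features. Values are illustrative only.
X = np.array([[1.0, 0.5, 1.0],
              [1.0, 1.5, 0.0],
              [1.0, 2.0, 2.0],
              [1.0, 3.5, 1.0],
              [1.0, 4.0, 3.0]])
y = np.array([1.0, 2.0, 4.0, 6.0, 9.0])

# Optimization view: minimize ||Xw - y||^2. np.linalg.lstsq solves this
# via the SVD of X, tying the problem back to linear algebra.
w, residual, rank, sv = np.linalg.lstsq(X, y, rcond=None)

# Linear algebra view: the normal equations X^T X w = X^T y give the
# same solution here because X has full column rank.
w_normal = np.linalg.solve(X.T @ X, X.T @ y)

print("lstsq solution:   ", w)
print("normal equations: ", w_normal)
```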
A frequent challenge faced by beginners in machine learning is the extensive background required in linear algebra and optimization. One problem is that existing linear algebra and optimization courses are not specific to machine learning, so one would typically have to complete more course material than is necessary to pick up machine learning. Furthermore, certain types of ideas and tricks from optimization and linear algebra recur more frequently in machine learning than in other application-centric settings. There is therefore significant value in developing a view of linear algebra and optimization that is better suited to the specific perspective of machine learning.
About the Author
´º¿å ¿äũŸ¿î ÇÏÀÌÃ÷ÀÇ IBM T. J. ¿Ó½¼ ¸®¼Ä¡ ¼¾ÅÍÀÇ ¶Ù¾î³ ¿¬±¸ ȸ¿ø(DRSM)ÀÌ´Ù. 1993³â¿¡ IIT Kanpur¿¡¼ Çлç ÇÐÀ§¸¦ ¹Þ¾Ò°í, 1996³â¿¡ MIT¿¡¼ ¹Ú»ç ÇÐÀ§¸¦ ¹Þ¾Ò´Ù. µ¥ÀÌÅÍ ¸¶ÀÌ´× ºÐ¾ß¿¡¼ Æø³Ð°Ô ÀÏÇØ¿Ô°í, 400°³ ÀÌ»óÀÇ ³í¹®À» ÄÜÆÛ·±½º¿Í ÇмúÁö¿¡ ¹ßÇ¥ÇßÀ¸¸ç 80°³ ÀÌ»óÀÇ Æ¯Çã±ÇÀÌ ÀÖ´Ù. µ¥ÀÌÅÍ ¸¶À̴׿¡ °üÇÑ ±³°ú¼, ƯÀÌÄ¡ ºÐ¼®¿¡ °üÇÑ Æ÷°ýÀûÀΠåÀ» Æ÷ÇÔÇÑ 15±ÇÀÇ Ã¥À» Àú¼úÇϰųª ÆíÁýÇß´Ù. ƯÇãÀÇ »ó¾÷Àû °¡Ä¡ ´öºÐ¿¡ IBM¿¡¼ ¸¶½ºÅÍ ¹ß¸í°¡·Î ¼¼ ¹øÀ̳ª ÁöÁ¤µÆ´Ù. µ¥ÀÌÅÍ ½ºÆ®¸²ÀÇ »ý¹° Å×·¯¸®½ºÆ® À§Çù ŽÁö¿¡ ´ëÇÑ ¿¬±¸·Î IBM ±â¾÷»ó(2003)À» ¼ö»ó Çß°í, ÇÁ¶óÀ̹ö½Ã ±â¼ú¿¡ ´ëÇÑ °úÇÐÀûÀÎ °øÇåÀ¸·Î IBM ¿ì¼ö Çõ½Å»ó(2008)À» ¼ö»óÇß´Ù. µ¥ÀÌÅÍ ½ºÆ®¸² ¹× °íÂ÷¿øÀûÀÎ ÀÛ¾÷¿¡ ´ëÇÑ °¢°¢ÀÇ ÀÛ¾÷À» ÀÎÁ¤¹Þ¾Æ µÎ °³ÀÇ IBM ¿ì¼ö ±â¼ú ¼º°ú»ó(2009, 2015)À» ¼ö»óÇß´Ù. ÀÀÃà ±â¹Ý ÇÁ¶óÀ̹ö½Ã º¸Á¸ µ¥ÀÌÅÍ ¸¶À̴׿¡ ´ëÇÑ ¿¬±¸·Î EDBT 2014 Test of Time Award¸¦ ¼ö»óÇß´Ù. ¶ÇÇÑ µ¥ÀÌÅÍ ¸¶ÀÌ´× ºÐ¾ß¿¡¼ ¿µÇâ·Â ÀÖ´Â ¿¬±¸ °øÇå¿¡ ´ëÇÑ µÎ °¡Áö ÃÖ°í»ó Áß ÇϳªÀÎ IEEE ICDM ¿¬±¸ °øÇå»ó(2015)À» ¼ö»óÇß´Ù. IEEE ºòµ¥ÀÌÅÍ ÄÜÆÛ·±½º(2014)ÀÇ ÃÑ°ý °øµ¿ ÀÇÀåÁ÷°ú, ACM CIKM ÄÜÆÛ·±½º(2015), IEEE ICDM ÄÜÆÛ·±½º(2015), ACM KDD ÄÜÆÛ·±½º(2016) ÇÁ·Î±×·¥ °øµ¿ ÀÇÀåÁ÷À» ¿ªÀÓÇß´Ù. 2004³âºÎÅÍ 2008³â±îÁö ¡¸IEEE Transactions on Knowledge and Data Engineering¡¹ÀÇ ºÎÆíÁýÀåÀ¸·Î ±Ù¹«Çß´Ù. ¡¸ACM Transactions on Knowledge Discovery from Data¡¹ÀÇ ºÎÆíÁýÀå, ¡¸IEEE Transactions on Big Data¡¹ÀÇ ºÎÆíÁýÀå, ¡¸Data Mining and Knowledge Discovery Journal¡¹°ú ¡¸ACM SIGKDD Exploration¡¹ÀÇ ÆíÁýÀå, ¡¸Knowledge and Information Systems Journal¡¹ÀÇ ºÎÆíÁýÀåÀÌ´Ù. SpringerÀÇ °£Ç๰ÀÎ ¡¸Lecture Notes on Social Networks¡¹ ÀÚ¹® À§¿øȸ¿¡¼ È°µ¿ÇÏ°í ÀÖÀ¸¸ç µ¥ÀÌÅÍ ¸¶À̴׿¡ °üÇÑ SIAM È°µ¿ ±×·ìÀÇ ºÎ»çÀåÀ» ¿ªÀÓÇß´Ù. ¡°contributions to knowledge discovery and data mining algorithms¡±¿¡ °üÇÑ SIAM, ACM, IEEEÀÇ Æç·Î¿ì´Ù.
Seller Information
Company Name | Kyobo Book Centre Co., Ltd.
---|---
Representative | Ahn Byung-hyun
Business Registration No. | 102-81-11670
Contact | 1544-1900
Email | callcenter@kyobobook.co.kr
Mail-Order Business Report No. | 01-0653
Business Address | 1, Jong-ro, Jongno-gu, Seoul (Jongno 1-ga, Kyobo Building)
Exchanges/Refunds
Return/Exchange Method | Apply via 'My Page > Cancel/Return/Exchange/Refund', through the 1:1 inquiry board, or by calling customer service (1577-2555)
---|---
Return/Exchange Period | Returns due to a change of mind are accepted only within 6 business days of dispatch completion
Return/Exchange Cost | For returns/exchanges due to a change of mind or a mistaken purchase, return shipping is paid by the customer
Cases Where Return/Exchange Is Not Possible | · The product was lost or damaged through the consumer's fault · The product's value has noticeably decreased through the consumer's use or opening of the packaging · The packaging of a reproducible product has been damaged · The product's value has decreased over time to the point that resale is difficult · The case falls under the restrictions on withdrawal of purchase defined by the Act on Consumer Protection in Electronic Commerce
Out of Stock | Products may be out of stock or delayed depending on supplier (publisher) inventory
Consumer Compensation | · Exchanges, after-sales service, refunds, quality assurance, and compensation for defective products are handled in accordance with the Consumer Dispute Resolution Standards (Fair Trade Commission notification) · Refunds and compensation for delayed refunds are handled in accordance with the Act on Consumer Protection in Electronic Commerce
To ensure safe transactions, Interpark Commerce Co., Ltd. applies the purchase protection service provided by KG Inicis Co., Ltd. to all transactions made through Interpark Commerce, regardless of purchase amount or payment method.
Shipping Information
Kyobo Book Centre products are delivered by courier and typically arrive within 1-2 days of dispatch.
When products with different dispatch times are ordered together, shipping is based on the product with the longest dispatch time.
Military units, correctional facilities, and other restricted institutions can only receive postal parcel delivery.
Shipping fees follow the seller's shipping fee policy.