
Parallel Computer Architecture : A Hardware/Software Approach [Hardcover]

  • Author : Culler
  • Publisher : Morgan Kaufmann
  • Published : January 1, 1999
  • Pages : 1025
  • ISBN : 9781558603431
List price

25,000 won (earn 750P, 3% in reward points)


Table of Contents

Foreword ix (12)
Preface xxi
1 Introduction 1 (74)
1.1 Why Parallel Architecture 4 (21)
1.1.1 Application Trends 6 (6)
1.1.2 Technology Trends 12 (2)
1.1.3 Architectural Trends 14 (7)
1.1.4 Supercomputers 21 (2)
1.1.5 Summary 23 (2)
1.2 Convergence of Parallel Architectures 25 (27)
1.2.1 Communication Architecture 25 (3)
1.2.2 Shared Address Space 28 (9)
1.2.3 Message Passing 37 (5)
1.2.4 Convergence 42 (2)
1.2.5 Data Parallel Processing 44 (3)
1.2.6 Other Parallel Architectures 47 (3)
1.2.7 A Generic Parallel Architecture 50 (2)
1.3 Fundamental Design Issues 52 (11)
1.3.1 Communication Abstraction 53 (1)
1.3.2 Programming Model Requirements 53 (5)
1.3.3 Communication and Replication 58 (1)
1.3.4 Performance 59 (4)
1.3.5 Summary 63 (1)
1.4 Concluding Remarks 63 (3)
1.5 Historical References 66 (4)
1.6 Exercises 70 (5)
2 Parallel Programs 75 (46)
2.1 Parallel Application Case Studies 76 (5)
2.1.1 Simulating Ocean Currents 77 (1)
2.1.2 Simulating the Evolution of Galaxies 78 (1)
2.1.3 Visualizing Complex Scenes Using Ray Tracing 79 (1)
2.1.4 Mining Data for Associations 80 (1)
2.2 The Parallelization Process 81 (11)
2.2.1 Steps in the Process 82 (8)
2.2.2 Parallelizing Computation versus Data 90 (1)
2.2.3 Goals of the Parallelization Process 91 (1)
2.3 Parallelization of an Example Program 92 (24)
2.3.1 The Equation Solver Kernel 92 (1)
2.3.2 Decomposition 93 (5)
2.3.3 Assignment 98 (1)
2.3.4 Orchestration under the Data Parallel Model 99 (2)
2.3.5 Orchestration under the Shared Address Space Model 101(7)
2.3.6 Orchestration under the Message-Passing Model 108(8)
2.4 Concluding Remarks 116(1)
2.5 Exercises 117(4)
3 Programming for Performance 121(78)
3.1 Partitioning for Performance 123(14)
3.1.1 Load Balance and Synchronization Wait Time 123(8)
3.1.2 Reducing Inherent Communication 131(4)
3.1.3 Reducing the Extra Work 135(1)
3.1.4 Summary 136(1)
3.2 Data Access and Communication in a Multimemory System 137(5)
3.2.1 A Multiprocessor as an Extended Memory Hierarchy 138(1)
3.2.2 Artifactual Communication in the Extended Memory Hierarchy 139(1)
3.2.3 Artifactual Communication and Replication: The Working Set Perspective 140(2)
3.3 Orchestration for Performance 142(14)
3.3.1 Reducing Artifactual Communication 142(8)
3.3.2 Structuring Communication to Reduce Cost 150(6)
3.4 Performance Factors from the Processor's Perspective 156(4)
3.5 The Parallel Application Case Studies: An In-Depth Look 160(22)
3.5.1 Ocean 161(5)
3.5.2 Barnes-Hut 166(8)
3.5.3 Raytrace 174(4)
3.5.4 Data Mining 178(4)
3.6 Implications for Programming Models 182(8)
3.6.1 Naming 184(1)
3.6.2 Replication 184(2)
3.6.3 Overhead and Granularity of Communication 186(1)
3.6.4 Block Data Transfer 187(1)
3.6.5 Synchronization 188(1)
3.6.6 Hardware Cost and Design Complexity 188(1)
3.6.7 Performance Model 189(1)
3.6.8 Summary 189(1)
3.7 Concluding Remarks 190(2)
3.8 Exercises 192(7)
4 Workload-Driven Evaluation 199(70)
4.1 Scaling Workloads and Machines 202(13)
4.1.1 Basic Measures of Multiprocessor Performance 202(2)
4.1.2 Why Worry about Scaling? 204(2)
4.1.3 Key Issues in Scaling 206(1)
4.1.4 Scaling Models and Speedup Measures 207(4)
4.1.5 Impact of Scaling Models on the Equation Solver Kernel 211(2)
4.1.6 Scaling Workload Parameters 213(2)
4.2 Evaluating a Real Machine 215(16)
4.2.1 Performance Isolation Using Microbenchmarks 215(1)
4.2.2 Choosing Workloads 216(5)
4.2.3 Evaluating a Fixed-Size Machine 221(5)
4.2.4 Varying Machine Size 226(2)
4.2.5 Choosing Performance Metrics 228(3)
4.3 Evaluating an Architectural Idea or Trade-off 231(12)
4.3.1 Multiprocessor Simulation 233(1)
4.3.2 Scaling Down Problem and Machine Parameters for Simulation 234(4)
4.3.3 Dealing with the Parameter Space: An Example Evaluation 238(5)
4.3.4 Summary 243(1)
4.4 Illustrating Workload Characterization 243(19)
4.4.1 Workload Case Studies 244(9)
4.4.2 Workload Characteristics 253(9)
4.5 Concluding Remarks 262(1)
4.6 Exercises 263(6)
5 Shared Memory Multiprocessors 269(108)
5.1 Cache Coherence 273(10)
5.1.1 The Cache Coherence Problem 273(4)
5.1.2 Cache Coherence through Bus Snooping 277(6)
5.2 Memory Consistency 283(8)
5.2.1 Sequential Consistency 286(3)
5.2.2 Sufficient Conditions for Preserving Sequential Consistency 289(2)
5.3 Design Space for Snooping Protocols 291(14)
5.3.1 A Three-State (MSI) Write-Back Invalidation Protocol 293(6)
5.3.2 A Four-State (MESI) Write-Back Invalidation Protocol 299(2)
5.3.3 A Four-State (Dragon) Write-Back Update Protocol 301(4)
5.4 Assessing Protocol Design Trade-offs 305(29)
5.4.1 Methodology 306(1)
5.4.2 Bandwidth Requirement under the MESI Protocol 307(4)
5.4.3 Impact of Protocol Optimizations 311(2)
5.4.4 Trade-Offs in Cache Block Size 313(16)
5.4.5 Update-Based versus Invalidation-Based Protocols 329(5)
5.5 Synchronization 334(25)
5.5.1 Components of a Synchronization Event 335(1)
5.5.2 Role of the User and System 336(1)
5.5.3 Mutual Exclusion 337(15)
5.5.4 Point-to-Point Event Synchronization 352(1)
5.5.5 Global (Barrier) Event Synchronization 353(5)
5.5.6 Synchronization Summary 358(1)
5.6 Implications for Software 359(7)
5.7 Concluding Remarks 366(1)
5.8 Exercises 367(10)
6 Snoop-Based Multiprocessor Design 377(76)
6.1 Correctness Requirements 378(2)
6.2 Base Design: Single-Level Caches with an Atomic Bus 380(13)
6.2.1 Cache Controller and Tag Design 381(1)
6.2.2 Reporting Snoop Results 382(2)
6.2.3 Dealing with Write Backs 384(1)
6.2.4 Base Organization 385(1)
6.2.5 Nonatomic State Transitions 385(3)
6.2.6 Serialization 388(2)
6.2.7 Deadlock 390(1)
6.2.8 Livelock and Starvation 390(1)
6.2.9 Implementing Atomic Operations 391(2)
6.3 Multilevel Cache Hierarchies 393(5)
6.3.1 Maintaining Inclusion 394(2)
6.3.2 Propagating Transactions for Coherence in the Hierarchy 396(2)
6.4 Split-Transaction Bus 398(17)
6.4.1 An Example Split-Transaction Design 400(1)
6.4.2 Bus Design and Request-Response Matching 400(2)
6.4.3 Snoop Results and Conflicting Requests 402(2)
6.4.4 Flow Control 404(1)
6.4.5 Path of a Cache Miss 404(2)
6.4.6 Serialization and Sequential Consistency 406(3)
6.4.7 Alternative Design Choices 409(1)
6.4.8 Split-Transaction Bus with Multilevel Caches 410(3)
6.4.9 Supporting Multiple Outstanding Misses from a Processor 413(2)
6.5 Case Studies: SGI Challenge and Sun Enterprise 6000 415(18)
6.5.1 SGI Powerpath-2 System Bus 417(3)
6.5.2 SGI Processor and Memory Subsystems 420(2)
6.5.3 SGI I/O Subsystems 422(2)
6.5.4 SGI Challenge Memory System Performance 424(1)
6.5.5 Sun Gigaplane System Bus 424(3)
6.5.6 Sun Processor and Memory Subsystem 427(2)
6.5.7 Sun I/O Subsystem 429(1)
6.5.8 Sun Enterprise Memory System Performance 429(1)
6.5.9 Application Performance 429(4)
6.6 Extending Cache Coherence 433(13)
6.6.1 Shared Cache Designs 434(3)
6.6.2 Coherence for Virtually Indexed Caches 437(2)
6.6.3 Translation Lookaside Buffer Coherence 439(2)
6.6.4 Snoop-Based Cache Coherence on Rings 441(4)
6.6.5 Scaling Data and Snoop Bandwidth in Bus-Based Systems 445(1)
6.7 Concluding Remarks 446(1)
6.8 Exercises 446(7)
7 Scalable Multiprocessors 453(100)
7.1 Scalability 456(12)
7.1.1 Bandwidth Scaling 457(3)
7.1.2 Latency Scaling 460(1)
7.1.3 Cost Scaling 461(1)
7.1.4 Physical Scaling 462(5)
7.1.5 Scaling in a Generic Parallel Architecture 467(1)
7.2 Realizing Programming Models 468(18)
7.2.1 Primitive Network Transactions 470(3)
7.2.2 Shared Address Space 473(3)
7.2.3 Message Passing 476(5)
7.2.4 Active Messages 481(1)
7.2.5 Common Challenges 482(3)
7.2.6 Communication Architecture Design Space 485(1)
7.3 Physical DMA 486(5)
7.3.1 Node-to-Network Interface 486(2)
7.3.2 Implementing Communication Abstractions 488(1)
7.3.3 A Case Study: nCUBE/2 488(2)
7.3.4 Typical LAN Interfaces 490(1)
7.4 User-Level Access 491(5)
7.4.1 Node-to-Network Interface 491(2)
7.4.2 Case Study: Thinking Machines CM-5 493(1)
7.4.3 User-Level Handlers 494(2)
7.5 Dedicated Message Processing 496(10)
7.5.1 Case Study: Intel Paragon 499(4)
7.5.2 Case Study: Meiko CS-2 503(3)
7.6 Shared Physical Address Space 506(7)
7.6.1 Case Study: CRAY T3D 508(4)
7.6.2 Case Study: CRAY T3E 512(1)
7.6.3 Summary 513(1)
7.7 Clusters and Networks of Workstations 513(9)
7.7.1 Case Study: Myrinet SBUS Lanai 516(2)
7.7.2 Case Study: PCI Memory Channel 518(4)
7.8 Implications for Parallel Software 522(16)
7.8.1 Network Transaction Performance 522(5)
7.8.2 Shared Address Space Operations 527(1)
7.8.3 Message-Passing Operations 528(3)
7.8.4 Application-Level Performance 531(7)
7.9 Synchronization 538(10)
7.9.1 Algorithms for Locks 538(4)
7.9.2 Algorithms for Barriers 542(6)
7.10 Concluding Remarks 548(1)
7.11 Exercises 548(5)
8 Directory-Based Cache Coherence 553(126)
8.1 Scalable Cache Coherence 558(1)
8.2 Overview of Directory-Based Approaches 559(12)
8.2.1 Operation of a Simple Directory Scheme 560(4)
8.2.2 Scaling 564(1)
8.2.3 Alternatives for Organizing Directories 565(6)
8.3 Assessing Directory Protocols and Trade-Offs 571(8)
8.3.1 Data Sharing Patterns for Directory Schemes 571(7)
8.3.2 Local versus Remote Traffic 578(1)
8.3.3 Cache Block Size Effects 579(1)
8.4 Design Challenges for Directory Protocols 579(17)
8.4.1 Performance 584(5)
8.4.2 Correctness 589(7)
8.5 Memory-Based Directory Protocols: The SGI Origin System 596(26)
8.5.1 Cache Coherence Protocol 597(7)
8.5.2 Dealing with Correctness Issues 604(5)
8.5.3 Details of Directory Structure 609(1)
8.5.4 Protocol Extensions 610(2)
8.5.5 Overview of the Origin2000 Hardware 612(2)
8.5.6 Hub Implementation 614(4)
8.5.7 Performance Characteristics 618(4)
8.6 Cache-Based Directory Protocols: The Sequent NUMA-Q 622(23)
8.6.1 Cache Coherence Protocol 624(8)
8.6.2 Dealing with Correctness Issues 632(2)
8.6.3 Protocol Extensions 634(1)
8.6.4 Overview of NUMA-Q Hardware 635(2)
8.6.5 Protocol Interactions with SMP Node 639(2)
8.6.6 IQ-Link Implementation 639(2)
8.6.7 Performance Characteristics 641(2)
8.6.8 Comparison Case Study: The HAL S1 Multiprocessor 643(2)
8.7 Performance Parameters and Protocol Performance 645(3)
8.8 Synchronization 648(4)
8.8.1 Performance of Synchronization Algorithms 649(2)
8.8.2 Implementing Atomic Primitives 651(1)
8.9 Implications for Parallel Software 652(3)
8.10 Advanced Topics 655(14)
8.10.1 Reducing Directory Storage Overhead 655(4)
8.10.2 Hierarchical Coherence 659(10)
8.11 Concluding Remarks 669(3)
8.12 Exercises 672(7)
9 Hardware/Software Trade-Offs 679(70)
9.1 Relaxed Memory Consistency Models 681(19)
9.1.1 The System Specification 686(8)
9.1.2 The Programmer's Interface 694(4)
9.1.3 The Translation Mechanism 698(1)
9.1.4 Consistency Models in Real Multiprocessor Systems 698(2)
9.2 Overcoming Capacity Limitations 700(5)
9.2.1 Tertiary Caches 700(1)
9.2.2 Cache-Only Memory Architectures (COMA) 701(4)
9.3 Reducing Hardware Cost 705(19)
9.3.1 Hardware Access Control with a Decoupled Assist 707(1)
9.3.2 Access Control through Code Instrumentation 707(2)
9.3.3 Page-Based Access Control: Shared Virtual Memory 709(12)
9.3.4 Access Control through Language and Compiler Support 721(3)
9.4 Putting It All Together: A Taxonomy and Simple COMA 724(5)
9.4.1 Putting It All Together: Simple COMA and Stache 726(3)
9.5 Implications for Parallel Software 729(1)
9.6 Advanced Topics 730(9)
9.6.1 Flexibility and Address Constraints in CC-NUMA Systems 730(2)
9.6.2 Implementing Relaxed Memory Consistency in Software 732(7)
9.7 Concluding Remarks 739(1)
9.8 Exercises 740(9)
10 Interconnection Network Design 749(82)
10.1 Basic Definitions 750(5)
10.2 Basic Communication Performance 755(9)
10.2.1 Latency 755(6)
10.2.2 Bandwidth 761(3)
10.3 Organizational Structure 764(4)
10.3.1 Links 764(3)
10.3.2 Switches 767(1)
10.3.3 Network Interfaces 768(1)
10.4 Interconnection Topologies 768(11)
10.4.1 Fully Connected Network 768(1)
10.4.2 Linear Arrays and Rings 769(1)
10.4.3 Multidimensional Meshes and Tori 769(3)
10.4.4 Trees 772(2)
10.4.5 Butterflies 774(4)
10.4.6 Hypercubes 778(1)
10.5 Evaluating Design Trade-Offs in Network Topology 779(10)
10.5.1 Unloaded Latency 780(5)
10.5.2 Latency under Load 785(4)
10.6 Routing 789(12)
10.6.1 Routing Mechanisms 789(1)
10.6.2 Deterministic Routing 790(1)
10.6.3 Deadlock Freedom 791(4)
10.6.4 Virtual Channels 795(1)
10.6.5 Up-Down Routing 796(1)
10.6.6 Turn-Model Routing 797(2)
10.6.7 Adaptive Routing 799(2)
10.7 Switch Design 801(10)
10.7.1 Ports 802(1)
10.7.2 Internal Datapath 802(2)
10.7.3 Channel Buffers 804(4)
10.7.4 Output Scheduling 808(2)
10.7.5 Stacked Dimension Switches 810(1)
10.8 Flow Control 811(7)
10.8.1 Parallel Computer Networks versus LANs and WANs 811(2)
10.8.2 Link-Level Flow Control 813(3)
10.8.3 End-to-End Flow Control 816(2)
10.9 Case Studies 818(9)
10.9.1 CRAY T3D Network 818(2)
10.9.2 IBM SP-1, SP-2 Network 820(2)
10.9.3 Scalable Coherent Interface 822(3)
10.9.4 SGI Origin Network 825(1)
10.9.5 Myricom Network 826(1)
10.10 Concluding Remarks 827(1)
10.11 Exercises 828(3)
11 Latency Tolerance 831(104)
11.1 Overview of Latency Tolerance 834(13)
11.1.1 Latency Tolerance and the Communication Pipeline 836(1)
11.1.2 Approaches 837(3)
11.1.3 Fundamental Requirements, Benefits, and Limitations 840(7)
11.2 Latency Tolerance in Explicit Message Passing 847(4)
11.2.1 Structure of Communication 848(1)
11.2.2 Block Data Transfer 848(1)
11.2.3 Precommunication 848(2)
11.2.4 Proceeding Past Communication in the Same Thread 850(1)
11.2.5 Multithreading 850(1)
11.3 Latency Tolerance in a Shared Address Space 851(2)
11.3.1 Structure of Communication 852(1)
11.4 Block Data Transfer in a Shared Address Space 853(10)
11.4.1 Techniques and Mechanisms 853(1)
11.4.2 Policy Issues and Trade-Offs 854(2)
11.4.3 Performance Benefits 856(7)
11.5 Proceeding Past Long-Latency Events 863(14)
11.5.1 Proceeding Past Writes 864(4)
11.5.2 Proceeding Past Reads 868(8)
11.5.3 Summary 876(1)
11.6 Precommunication in a Shared Address Space 877(19)
11.6.1 Shared Address Space without Caching of Shared Data 877(2)
11.6.2 Cache-Coherent Shared Address Space 879(12)
11.6.3 Performance Benefits 891(5)
11.6.4 Summary 896(1)
11.7 Multithreading in a Shared Address Space 896(26)
11.7.1 Techniques and Mechanisms 898(12)
11.7.2 Performance Benefits 910(4)
11.7.3 Implementation Issues for the Blocked Scheme 914(3)
11.7.4 Implementation Issues for the Interleaved Scheme 917(3)
11.7.5 Integrating Multithreading with Multiple-Issue Processors 920(2)
11.8 Lockup-Free Cache Design 922(4)
11.9 Concluding Remarks 926(1)
11.10 Exercises 927(8)
12 Future Directions 935(28)
12.1 Technology and Architecture 936(19)
12.1.1 Evolutionary Scenario 937(3)
12.1.2 Hitting a Wall 940(4)
12.1.3 Potential Breakthroughs 944(11)
12.2 Applications and System Software 955(8)
12.2.1 Evolutionary Scenario 955(5)
12.2.2 Hitting a Wall 960(1)
12.2.3 Potential Breakthroughs 961(2)
Appendix: Parallel Benchmark Suites 963(6)
A.1 ScaLapack 963(1)
A.2 TPC 963(2)
A.3 SPLASH 965(1)
A.4 NAS Parallel Benchmarks 966(1)
A.5 PARKBENCH 967(1)
A.6 Other Ongoing Efforts 968(1)
References 969(24)
Index 993

Book Description

The most exciting development in parallel computer architecture is the convergence of traditionally disparate approaches on a common machine structure. This book explains the forces behind this convergence of shared-memory, message-passing, data parallel, and data-driven computing architectures. It then examines the design issues that are critical to all parallel architecture across the full range of modern design, covering data access, communication performance, coordination of cooperative work, and correct implementation of useful semantics. It not only describes the hardware and software techniques for addressing each of these issues but also explores how these techniques interact in the same system. Examining architecture from an application-driven perspective, it provides comprehensive discussions of parallel programming for high performance and of workload-driven evaluation, based on understanding hardware-software interactions.

  • Synthesizes a decade of research and development for practicing engineers, graduate students, and researchers in parallel computer architecture, system software, and applications development
  • Presents in-depth application case studies from computer graphics, computational science and engineering, and data mining to demonstrate sound quantitative evaluation of design trade-offs
  • Describes the process of programming for performance, including both the architecture-independent and architecture-dependent aspects, with examples and case studies
  • Illustrates bus-based and network-based parallel systems with case studies of more than a dozen important commercial designs

About the Author

Date of birth: -

No introduction is available for this author.
