
Engineering Optimization [Hardcover]

List Price: 80,000 KRW

Contents
Preface xix
Acknowledgments xxvii
Nomenclature xxix
About the Companion Website xxxvii

Section 1 Introductory Concepts 1

1 Optimization: Introduction and Concepts 3
1.1 Optimization and Terminology 3
1.2 Optimization Concepts and Definitions 4
1.3 Examples 6
1.4 Terminology Continued 10
1.4.1 Constraint 10
1.4.2 Feasible Solutions 10
1.4.3 Minimize or Maximize 11
1.4.4 Canonical Form of the Optimization Statement 11
1.5 Optimization Procedure 12
1.6 Issues That Shape Optimization Procedures 16
1.7 Opposing Trends 17
1.8 Uncertainty 20
1.9 Over- and Under-specification in Linear Equations 21
1.10 Over- and Under-specification in Optimization 22
1.11 Test Functions 23
1.12 Significant Dates in Optimization 23
1.13 Iterative Procedures 26
1.14 Takeaway 27
1.15 Exercises 27

2 Optimization Application Diversity and Complexity 33
2.1 Optimization 33
2.2 Nonlinearity 33
2.3 Min, Max, Min-Max, Max-Min, ... 34
2.4 Integers and Other Discretization 35
2.5 Conditionals and Discontinuities: Cliffs, Ridges/Valleys 36
2.6 Procedures, Not Equations 37
2.7 Static and Dynamic Models 38
2.8 Path Integrals 38
2.9 Economic Optimization and Other Nonadditive Cost Functions 38
2.10 Reliability 39
2.11 Regression 40
2.12 Deterministic and Stochastic 42
2.13 Experimental w.r.t. Modeled OF 43
2.14 Single and Multiple Optima 44
2.15 Saddle Points 45
2.16 Inflections 46
2.17 Continuum and Discontinuous DVs 47
2.18 Continuum and Discontinuous Models 47
2.19 Constraints and Penalty Functions 48
2.20 Ranks and Categorization: Discontinuous OFs 50
2.21 Underspecified OFs 51
2.22 Takeaway 51
2.23 Exercises 51

3 Validation: Knowing That the Answer Is Right 53
3.1 Introduction 53
3.2 Validation 53
3.3 Advice on Becoming Proficient 55
3.4 Takeaway 56
3.5 Exercises 57

Section 2 Univariate Search Techniques 59
4 Univariate (Single DV) Search Techniques 61
4.1 Univariate (Single DV) 61
4.2 Analytical Method of Optimization 62
4.2.1 Issues with the Analytical Approach 63
4.3 Numerical Iterative Procedures 64
4.3.1 Newton's Methods 64
4.3.2 Successive Quadratic (A Surrogate Model or Approximating Model Method) 68
4.4 Direct Search Approaches 70
4.4.1 Bisection Method 70
4.4.2 Golden Section Method 72
4.4.3 Perspective at This Point 74
4.4.4 Heuristic Direct Search 74
4.4.5 Leapfrogging 76
4.4.6 LF for Stochastic Functions 79
4.5 Perspectives on Univariate Search Methods 82
4.6 Evaluating Optimizers 85
4.7 Summary of Techniques 85
4.7.1 Analytical Method 86
4.7.2 Newton's (and Variants Like Secant) 86
4.7.3 Successive Quadratic 86
4.7.4 Golden Section Method 86
4.7.5 Heuristic Direct 87
4.7.6 Leapfrogging 87
4.8 Takeaway 87
4.9 Exercises 88

5 Path Analysis 93
5.1 Introduction 93
5.2 Path Examples 93
5.3 Perspective About Variables 96
5.4 Path Distance Integral 97
5.5 Accumulation along a Path 99
5.6 Slope along a Path 101
5.7 Parametric Path Notation 103
5.8 Takeaway 104
5.9 Exercises 104

6 Stopping and Convergence Criteria: 1-D Applications 107
6.1 Stopping versus Convergence Criteria 107
6.2 Determining Convergence 107
6.2.1 Threshold on the OF 108
6.2.2 Threshold on the Change in the OF 108
6.2.3 Threshold on the Change in the DV 108
6.2.4 Threshold on the Relative Change in the DV 109
6.2.5 Threshold on the Relative Change in the OF 109
6.2.6 Threshold on the Impact of the DV on the OF 109
6.2.7 Convergence Based on Uncertainty Caused by the Givens 109
6.2.8 Multiplayer Range 110
6.2.9 Steady-State Convergence 110
6.3 Combinations of Convergence Criteria 111
6.4 Choosing Convergence Threshold Values 112
6.5 Precision 112
6.6 Other Convergence Criteria 113
6.7 Stopping Criteria to End a Futile Search 113
6.7.1 N Iteration Threshold 114
6.7.2 Execution Error 114
6.7.3 Constraint Violation 114
6.8 Choices! 114
6.9 Takeaway 114
6.10 Exercises 115

Section 3 Multivariate Search Techniques 117
7 Multidimension Application Introduction and the Gradient 119
7.1 Introduction 119
7.2 Illustration of Surface and Terms 122
7.3 Some Surface Analysis 123
7.4 Parametric Notation 128
7.5 Extension to Higher Dimension 130
7.6 Takeaway 131
7.7 Exercises 131

8 Elementary Gradient-Based Optimizers: CSLS and ISD 135
8.1 Introduction 135
8.2 Cauchy's Sequential Line Search 135
8.2.1 CSLS with Successive Quadratic 137
8.2.2 CSLS with Newton/Secant 138
8.2.3 CSLS with Golden Section 138
8.2.4 CSLS with Leapfrogging 138
8.2.5 CSLS with Heuristic Direct Search 139
8.2.6 CSLS Commentary 139
8.2.7 CSLS Pseudocode 140
8.2.8 VBA Code for a 2-DV Application 141
8.3 Incremental Steepest Descent 144
8.3.1 Pseudocode for the ISD Method 144
8.3.2 Enhanced ISD 145
8.3.3 ISD Code 148
8.4 Takeaway 149
8.5 Exercises 149

9 Second-Order Model-Based Optimizers: SQ and NR 155
9.1 Introduction 155
9.2 Successive Quadratic 155
9.2.1 Multivariable SQ 156
9.2.2 SQ Pseudocode 159
9.3 Newton-Raphson 159
9.3.1 NR Pseudocode 162
9.3.2 Attenuate NR 163
9.3.3 Quasi-Newton 166
9.4 Perspective on CSLS, ISD, SQ, and NR 168
9.5 Choosing Step Size for Numerical Estimate of Derivatives 169
9.6 Takeaway 170
9.7 Exercises 170

10 Gradient-Based Optimizer Solutions: LM, RLM, CG, BFGS, RG, and GRG 173
10.1 Introduction 173
10.2 Levenberg-Marquardt (LM) 173
10.2.1 LM VBA Code for a 2-DV Case 175
10.2.2 Modified LM (RLM) 176
10.2.3 RLM Pseudocode 177
10.2.4 RLM VBA Code for a 2-DV Case 178
10.3 Scaled Variables 180
10.4 Conjugate Gradient (CG) 182
10.5 Broyden-Fletcher-Goldfarb-Shanno (BFGS) 183
10.6 Generalized Reduced Gradient (GRG) 184
10.7 Takeaway 186
10.8 Exercises 186

11 Direct Search Techniques 187
11.1 Introduction 187
11.2 Cyclic Heuristic Direct (CHD) Search 188
11.2.1 CHD Pseudocode 188
11.2.2 CHD VBA Code 189
11.3 Hooke-Jeeves (HJ) 192
11.3.1 HJ Code in VBA 195
11.4 Compare and Contrast CHD and HJ Features: A Summary 197
11.5 Nelder-Mead (NM) Simplex: Spendley, Hext, and Himsworth 199
11.6 Multiplayer Direct Search Algorithms 200
11.7 Leapfrogging 201
11.7.1 Convergence Criteria 208
11.7.2 Stochastic Surfaces 209
11.7.3 Summary 209
11.8 Particle Swarm Optimization 209
11.8.1 Individual Particle Behavior 210
11.8.2 Particle Swarm 213
11.8.3 PSO Equation Analysis 215
11.9 Complex Method (CM) 216
11.10 A Brief Comparison 217
11.11 Takeaway 218
11.12 Exercises 219

12 Linear Programming 223
12.1 Introduction 223
12.2 Visual Representation and Concepts 225
12.3 Basic LP Procedure 228
12.4 Canonical LP Statement 228
12.5 LP Algorithm 229
12.6 Simplex Tableau 230
12.7 Takeaway 231
12.8 Exercises 231

13 Dynamic Programming 233
13.1 Introduction 233
13.2 Conditions 236
13.3 DP Concept 237
13.4 Some Calculation Tips 240
13.5 Takeaway 241
13.6 Exercises 241

14 Genetic Algorithms and Evolutionary Computation 243
14.1 Introduction 243
14.2 GA Procedures 243
14.3 Fitness of Selection 245
14.4 Takeaway 250
14.5 Exercises 250

15 Intuitive Optimization 253
15.1 Introduction 253
15.2 Levels 254
15.3 Takeaway 254
15.4 Exercises 254

16 Surface Analysis II 257
16.1 Introduction 257
16.2 Maximize Is Equivalent to Minimize the Negative 257
16.3 Scaling by a Positive Number Does Not Change DV* 258
16.4 Scaled and Translated OFs Do Not Change DV* 258
16.5 Monotonic Function Transformation Does Not Change DV* 258
16.6 Impact on Search Path or NOFE 261
16.7 Inequality Constraints 263
16.8 Transforming DVs 263
16.9 Takeaway 263
16.10 Exercises 263

17 Convergence Criteria 2: N-D Applications 265
17.1 Introduction 265
17.2 Defining an Iteration 265
17.3 Criteria for Single TS Deterministic Procedures 266
17.4 Criteria for Multiplayer Deterministic Procedures 267
17.5 Stochastic Applications 268
17.6 Miscellaneous Observations 268
17.7 Takeaway 269
17.8 Exercises 269

18 Enhancements to Optimizers 271
18.1 Introduction 271
18.2 Criteria for Replicate Trials 271
18.3 Quasi-Newton 274
18.4 Coarse-Fine Sequence 275
18.5 Number of Players 275
18.6 Search Range Adjustment 276
18.7 Adjustment of Optimizer Coefficient Values or Options in Process 276
18.8 Initialization Range 277
18.9 OF and DV Transformations 277
18.10 Takeaway 278
18.11 Exercises 278

Section 4 Developing Your Application Statements 279
19 Scaled Variables and Dimensional Consistency 281
19.1 Introduction 281
19.2 A Scaled Variable Approach 283
19.3 Sampling of Issues with Primitive Variables 283
19.4 Linear Scaling Options 285
19.5 Nonlinear Scaling 286
19.6 Takeaway 287
19.7 Exercises 287

20 Economic Optimization 289
20.1 Introduction 289
20.2 Annual Cash Flow 290
20.3 Including Risk as an Annual Expense 291
20.4 Capital 293
20.5 Combining Capital and Nominal Annual Cash Flow 293
20.6 Combining Time Value and Schedule of Capital and Annual Cash Flow 296
20.7 Present Value 297
20.8 Including Uncertainty 298
20.8.1 Uncertainty Models 301
20.8.2 Methods to Include Uncertainty in an Optimization 303
20.9 Takeaway 304
20.10 Exercises 304

21 Multiple OF and Constraint Applications 305
21.1 Introduction 305
21.2 Solution 1: Additive Combinations of the Functions 306
21.2.1 Solution 1a: Classic Weighting Factors 307
21.2.2 Solution 1b: Equal Concern Weighting 307
21.2.3 Solution 1c: Nonlinear Weighting 309
21.3 Solution 2: Nonadditive OF Combinations 311
21.4 Solution 3: Pareto Optimal 311
21.5 Takeaway 316
21.6 Exercises 316

22 Constraints 319
22.1 Introduction 319
22.2 Equality Constraints 320
22.2.1 Explicit Equality Constraints 320
22.2.2 Implicit Equality Constraints 321
22.3 Inequality Constraints 321
22.3.1 Penalty Function: Discontinuous 323
22.3.2 Penalty Function: Soft Constraint 323
22.3.3 Inequality Constraints: Slack and Surplus Variables 325
22.4 Constraints: Pass/Fail Categories 329
22.5 Hard Constraints Can Block Progress 330
22.6 Advice 331
22.7 Constraint-Equivalent Features 332
22.8 Takeaway 332
22.9 Exercises 332

23 Multiple Optima 335
23.1 Introduction 335
23.2 Solution: Multiple Starts 337
23.2.1 A Priori Method 340
23.2.2 A Posteriori Method 342
23.2.3 Snyman and Fatti Criterion A Posteriori Method 345
23.3 Other Options 348
23.4 Takeaway 349
23.5 Exercises 350

24 Stochastic Objective Functions 353
24.1 Introduction 353
24.2 Method Summary for Optimizing Stochastic Functions 356
24.2.1 Step 1: Replicate the Apparent Best Player 356
24.2.2 Step 2: Steady-State Detection 357
24.3 What Value to Report? 358
24.4 Application Examples 359
24.4.1 GMC Control of Hot and Cold Mixing 359
24.4.2 MBC of Hot and Cold Mixing 359
24.4.3 Batch Reaction Management 359
24.4.4 Reservoir and Stochastic Boot Print 361
24.4.5 Optimization Results 362
24.5 Takeaway 365
24.6 Exercises 365

25 Effects of Uncertainty 367
25.1 Introduction 367
25.2 Sources of Error and Uncertainty 368
25.3 Significant Digits 370
25.4 Estimating Uncertainty on Values 371
25.5 Propagating Uncertainty on DV Values 372
25.5.1 Analytical Method 373
25.5.2 Numerical Method 375
25.6 Implicit Relations 378
25.7 Estimating Uncertainty in DV* and OF* 378
25.8 Takeaway 379
25.9 Exercises 379
26 Optimization of Probable Outcomes and Distribution Characteristics 381
26.1 Introduction 381
26.2 The Concept of Modeling Uncertainty 385
26.3 Stochastic Approach 387
26.4 Takeaway 389
26.5 Exercises 389

27 Discrete and Integer Variables 391
27.1 Introduction 391
27.2 Optimization Solutions 394
27.2.1 Exhaustive Search 394
27.2.2 Branch and Bound 394
27.2.3 Cyclic Heuristic 394
27.2.4 Leapfrogging or Other Multiplayer Search 395
27.3 Convergence 395
27.4 Takeaway 395
27.5 Exercises 395

28 Class Variables 397
28.1 Introduction 397
28.2 The Random Keys Method: Sequence 398
28.3 The Random Keys Method: Dichotomous Variables 400
28.4 Comments 401
28.5 Takeaway 401
28.6 Exercises 401

29 Regression 403
29.1 Introduction 403
29.2 Perspective 404
29.3 Least Squares Regression: Traditional View on Linear Model Parameters 404
29.4 Models Nonlinear in DV 405
29.4.1 Models with a Delay 407
29.5 Maximum Likelihood 408
29.5.1 Akaho's Method 411
29.6 Convergence Criterion 416
29.7 Model Order or Complexity 421
29.8 Bootstrapping to Reveal Model Uncertainty 425
29.8.1 Interpretation of Bootstrapping Analysis 428
29.8.2 Appropriating Bootstrapping 430
29.9 Perspective 431
29.10 Takeaway 431
29.11 Exercises 432

Section 5 Perspective on Many Topics 441
30 Perspective 443
30.1 Introduction 443
30.2 Classifications 443
30.3 Elements Associated with Optimization 445
30.4 Root Finding Is Not Optimization 446
30.5 Desired Engineering Attributes 446
30.6 Overview of Optimizers and Attributes 447
30.6.1 Gradient Based: Cauchy Sequential Line Search, Incremental Steepest Descent, GRG, Etc. 447
30.6.2 Local Surface Characterization Based: Newton-Raphson, Levenberg-Marquardt, Successive Quadratic, RLM, Quasi-Newton, Etc. 448
30.6.3 Direct Search with Single Trial Solution: Cyclic Heuristic, Hooke-Jeeves, and Nelder-Mead 448
30.6.4 Multiplayer Direct Search Optimizers: Leapfrogging, Particle Swarm, and Genetic Algorithms 448
30.7 Choices 448
30.8 Variable Classifications 449
30.8.1 Nominal 449
30.8.2 Ordinal 450
30.8.3 Cardinal 450
30.9 Constraints 451
30.10 Takeaway 453
30.11 Exercises 453

31 Response Surface Aberrations 459
31.1 Introduction 459
31.2 Cliffs (Vertical Walls) 459
31.3 Sharp Valleys (or Ridges) 459
31.4 Striations 463
31.5 Level Spots (Functions 1, 27, 73, 84) 463
31.6 Hard-to-Find Optimum 466
31.7 Infeasible Calculations 468
31.8 Uniform Minimum 468
31.9 Noise: Stochastic Response 469
31.10 Multiple Optima 471
31.11 Takeaway 473
31.12 Exercises 473

32 Identifying the Models, OF, DV, Convergence Criteria, and Constraints 475
32.1 Introduction 475
32.2 Evaluate the Results 476
32.3 Takeaway 482
32.4 Exercises 482

33 Evaluating Optimizers 489
33.1 Introduction 489
33.2 Challenges to Optimizers 490
33.3 Stakeholders 490
33.4 Metrics of Optimizer Performance 490
33.5 Designing an Experimental Test 492
33.6 Takeaway 495
33.7 Exercises 496

34 Troubleshooting Optimizers 499
34.1 Introduction 499
34.2 DV Values Do Not Change 499
34.3 Multiple DV* Values for the Same OF* Value 499
34.4 EXE Error 500
34.5 Extreme Values 500
34.6 DV* Is Dependent on Convergence Threshold 500
34.7 OF* Is Irreproducible 501
34.8 Concern over Results 501
34.9 CDF Features 501
34.10 Parameter Correlation 502
34.11 Multiple Equivalent Solutions 504
34.12 Takeaway 504
34.13 Exercises 504

Section 6 Analysis of Leapfrogging Optimization 505
35 Analysis of Leapfrogging 507
35.1 Introduction 507
35.2 Balance in an Optimizer 508
35.3 Number of Initializations to be Confident That the Best Will Draw All Others to the Global Optimum 510
35.3.1 Methodology 511
35.3.2 Experimental 512
35.3.3 Results 513
35.4 Leap-To Window Amplification Analysis 515
35.5 Analysis of α and M to Prevent Convergence on the Side of a Hill 519
35.6 Analysis of α and M to Minimize NOFE 521
35.7 Probability Distribution of Leap-Overs 522
35.7.1 Data 526
35.8 Takeaway 527
35.9 Exercises 528

Section 7 Case Studies 529
36 Case Study 1: Economic Optimization of a Pipe System 531
36.1 Process and Analysis 531
36.1.1 Deterministic Continuum Model 531
36.1.2 Deterministic Discontinuous Model 534
36.1.3 Stochastic Discontinuous Model 535
36.2 Exercises 536

37 Case Study 2: Queuing Study 539
37.1 The Process and Analysis 539
37.2 Exercises 541

38 Case Study 3: Retirement Study 543
38.1 The Process and Analysis 543
38.2 Exercises 550

39 Case Study 4: A Goddard Rocket Study 551
39.1 The Process and Analysis 551
39.2 Pre-Assignment Note 554
39.3 Exercises 555

40 Case Study 5: Reservoir 557
40.1 The Process and Analysis 557
40.2 Exercises 559

41 Case Study 6: Area Coverage 561
41.1 Description and Analysis 561
41.2 Exercises 562

42 Case Study 7: Approximating Series Solution to an ODE 565
42.1 Concepts and Analysis 565
42.2 Exercises 568

43 Case Study 8: Horizontal Tank Vapor-Liquid Separator 571
43.1 Description and Analysis 571
43.2 Exercises 576

44 Case Study 9: In Vitro Fertilization 579
44.1 Description and Analysis 579
44.2 Exercises 583

45 Case Study 10: Data Reconciliation 585
45.1 Description and Analysis 585
45.2 Exercises 588

Section 8 Appendices 591
Appendix A Mathematical Concepts and Procedures 593
Appendix B Root Finding 605
Appendix C Gaussian Elimination 611
Appendix D Steady-State Identification in Noisy Signals 621
Appendix E Optimization Challenge Problems (2-D and Single OF) 635
Appendix F Brief on VBA Programming: Excel in Office 2013 709

Section 9 References and Index 717
References and Additional Resources 719
Index 723

About the Author

R. Russell Rhinehart [Author]

No introduction is available for this author.
