
Machine Learning for Business Analytics: Concepts, Techniques, and Applications in R [Hardcover]

List Price: 89,000 KRW

Table of Contents

Foreword by Ravi Bapna xxi
Foreword by Gareth James xxiii
Preface to the Second R Edition xxv
Acknowledgments xxix

PART I PRELIMINARIES

CHAPTER 1 Introduction 3
1.1 What Is Business Analytics? 3
1.2 What Is Machine Learning? 5
1.3 Machine Learning, AI, and Related Terms 5
1.4 Big Data 7
1.5 Data Science 8
1.6 Why Are There So Many Different Methods? 8
1.7 Terminology and Notation 9
1.8 Road Maps to This Book 11
    Order of Topics 13

CHAPTER 2 Overview of the Machine Learning Process 17
2.1 Introduction 17
2.2 Core Ideas in Machine Learning 18
    Classification 18
    Prediction 18
    Association Rules and Recommendation Systems 18
    Predictive Analytics 19
    Data Reduction and Dimension Reduction 19
    Data Exploration and Visualization 19
    Supervised and Unsupervised Learning 20
2.3 The Steps in a Machine Learning Project 21
2.4 Preliminary Steps 23
    Organization of Data 23
    Predicting Home Values in the West Roxbury Neighborhood 23
    Loading and Looking at the Data in R 24
    Sampling from a Database 26
    Oversampling Rare Events in Classification Tasks 27
    Preprocessing and Cleaning the Data 28
2.5 Predictive Power and Overfitting 35
    Overfitting 36
    Creating and Using Data Partitions 38
2.6 Building a Predictive Model 41
    Modeling Process 41
2.7 Using R for Machine Learning on a Local Machine 46
2.8 Automating Machine Learning Solutions 47
    Predicting Power Generator Failure 48
    Uber's Michelangelo 50
2.9 Ethical Practice in Machine Learning 52
    Machine Learning Software: The State of the Market (by Herb Edelstein) 53
Problems 57

PART II DATA EXPLORATION AND DIMENSION REDUCTION

CHAPTER 3 Data Visualization 63
3.1 Uses of Data Visualization 63
    Base R or ggplot? 65
3.2 Data Examples 65
    Example 1: Boston Housing Data 65
    Example 2: Ridership on Amtrak Trains 67
3.3 Basic Charts: Bar Charts, Line Charts, and Scatter Plots 67
    Distribution Plots: Boxplots and Histograms 70
    Heatmaps: Visualizing Correlations and Missing Values 73
3.4 Multidimensional Visualization 75
    Adding Variables: Color, Size, Shape, Multiple Panels, and Animation 76
    Manipulations: Rescaling, Aggregation and Hierarchies, Zooming, Filtering 79
    Reference: Trend Lines and Labels 83
    Scaling Up to Large Datasets 85
    Multivariate Plot: Parallel Coordinates Plot 85
    Interactive Visualization 88
3.5 Specialized Visualizations 91
    Visualizing Networked Data 91
    Visualizing Hierarchical Data: Treemaps 93
    Visualizing Geographical Data: Map Charts 95
3.6 Major Visualizations and Operations, by Machine Learning Goal 97
    Prediction 97
    Classification 97
    Time Series Forecasting 97
    Unsupervised Learning 98
Problems 99

CHAPTER 4 Dimension Reduction 101
4.1 Introduction 101
4.2 Curse of Dimensionality 102
4.3 Practical Considerations 102
    Example 1: House Prices in Boston 103
4.4 Data Summaries 103
    Summary Statistics 104
    Aggregation and Pivot Tables 104
4.5 Correlation Analysis 107
4.6 Reducing the Number of Categories in Categorical Variables 109
4.7 Converting a Categorical Variable to a Numerical Variable 111
4.8 Principal Component Analysis 111
    Example 2: Breakfast Cereals 111
    Principal Components 116
    Normalizing the Data 117
    Using Principal Components for Classification and Prediction 120
4.9 Dimension Reduction Using Regression Models 121
4.10 Dimension Reduction Using Classification and Regression Trees 121
Problems 123

PART III PERFORMANCE EVALUATION

CHAPTER 5 Evaluating Predictive Performance 129
5.1 Introduction 130
5.2 Evaluating Predictive Performance 130
    Naive Benchmark: The Average 131
    Prediction Accuracy Measures 131
    Comparing Training and Holdout Performance 133
    Cumulative Gains and Lift Charts 133
5.3 Judging Classifier Performance 136
    Benchmark: The Naive Rule 136
    Class Separation 136
    The Confusion (Classification) Matrix 137
    Using the Holdout Data 138
    Accuracy Measures 139
    Propensities and Threshold for Classification 139
    Performance in Case of Unequal Importance of Classes 143
    Asymmetric Misclassification Costs 146
    Generalization to More Than Two Classes 149
5.4 Judging Ranking Performance 150
    Cumulative Gains and Lift Charts for Binary Data 150
    Decile-wise Lift Charts 153
    Beyond Two Classes 154
    Gains and Lift Charts Incorporating Costs and Benefits 154
    Cumulative Gains as a Function of Threshold 155
5.5 Oversampling 156
    Creating an Over-sampled Training Set 158
    Evaluating Model Performance Using a Non-oversampled Holdout Set 159
    Evaluating Model Performance If Only Oversampled Holdout Set Exists 159
Problems 162

PART IV PREDICTION AND CLASSIFICATION METHODS

CHAPTER 6 Multiple Linear Regression 167
6.1 Introduction 167
6.2 Explanatory vs Predictive Modeling 168
6.3 Estimating the Regression Equation and Prediction 170
    Example: Predicting the Price of Used Toyota Corolla Cars 171
    Cross-validation and caret 175
6.4 Variable Selection in Linear Regression 176
    Reducing the Number of Predictors 176
    How to Reduce the Number of Predictors 178
    Regularization (Shrinkage Models) 183
Problems 188

CHAPTER 7 k-Nearest Neighbors (kNN) 193
7.1 The k-NN Classifier (Categorical Outcome) 193
    Determining Neighbors 194
    Classification Rule 194
    Example: Riding Mowers 195
    Choosing k 196
    Weighted k-NN 199
    Setting the Cutoff Value 200
    k-NN with More Than Two Classes 201
    Converting Categorical Variables to Binary Dummies 201
7.2 k-NN for a Numerical Outcome 201
7.3 Advantages and Shortcomings of k-NN Algorithms 204
Problems 205

CHAPTER 8 The Naive Bayes Classifier 207
8.1 Introduction 207
    Threshold Probability Method 208
    Conditional Probability 208
    Example 1: Predicting Fraudulent Financial Reporting 208
8.2 Applying the Full (Exact) Bayesian Classifier 209
    Using the "Assign to the Most Probable Class" Method 210
    Using the Threshold Probability Method 210
    Practical Difficulty with the Complete (Exact) Bayes Procedure 210
8.3 Solution: Naive Bayes 211
    The Naive Bayes Assumption of Conditional Independence 212
    Using the Threshold Probability Method 212
    Example 2: Predicting Fraudulent Financial Reports, Two Predictors 213
    Example 3: Predicting Delayed Flights 214
    Working with Continuous Predictors 218
8.4 Advantages and Shortcomings of the Naive Bayes Classifier 220
Problems 223

CHAPTER 9 Classification and Regression Trees 225
9.1 Introduction 226
    Tree Structure 227
    Decision Rules 227
    Classifying a New Record 227
9.2 Classification Trees 228
    Recursive Partitioning 228
    Example 1: Riding Mowers 228
    Measures of Impurity 231
9.3 Evaluating the Performance of a Classification Tree 235
    Example 2: Acceptance of Personal Loan 236
9.4 Avoiding Overfitting 239
    Stopping Tree Growth 242
    Pruning the Tree 243
    Best-Pruned Tree 245
9.5 Classification Rules from Trees 247
9.6 Classification Trees for More Than Two Classes 248
9.7 Regression Trees 249
    Prediction 250
    Measuring Impurity 250
    Evaluating Performance 250
9.8 Advantages and Weaknesses of a Tree 250
9.9 Improving Prediction: Random Forests and Boosted Trees 252
    Random Forests 252
    Boosted Trees 254
Problems 257

CHAPTER 10 Logistic Regression 261
10.1 Introduction 261
10.2 The Logistic Regression Model 263
10.3 Example: Acceptance of Personal Loan 264
    Model with a Single Predictor 265
    Estimating the Logistic Model from Data: Computing Parameter Estimates 267
    Interpreting Results in Terms of Odds (for a Profiling Goal) 270
10.4 Evaluating Classification Performance 271
10.5 Variable Selection 273
10.6 Logistic Regression for Multi-Class Classification 274
    Ordinal Classes 275
    Nominal Classes 276
10.7 Example of Complete Analysis: Predicting Delayed Flights 277
    Data Preprocessing 282
    Model-Fitting and Estimation 282
    Model Interpretation 282
    Model Performance 284
    Variable Selection 285
Problems 289

CHAPTER 11 Neural Nets 293
11.1 Introduction 293
11.2 Concept and Structure of a Neural Network 294
11.3 Fitting a Network to Data 295
    Example 1: Tiny Dataset 295
    Computing Output of Nodes 296
    Preprocessing the Data 299
    Training the Model 300
    Example 2: Classifying Accident Severity 304
    Avoiding Overfitting 305
    Using the Output for Prediction and Classification 305
11.4 Required User Input 307
11.5 Exploring the Relationship Between Predictors and Outcome 308
11.6 Deep Learning 309
    Convolutional Neural Networks (CNNs) 310
    Local Feature Map 311
    A Hierarchy of Features 311
    The Learning Process 312
    Unsupervised Learning 312
    Example: Classification of Fashion Images 313
    Conclusion 320
11.7 Advantages and Weaknesses of Neural Networks 320
Problems 322

CHAPTER 12 Discriminant Analysis 325
12.1 Introduction 325
    Example 1: Riding Mowers 326
    Example 2: Personal Loan Acceptance 327
12.2 Distance of a Record from a Class 327
12.3 Fisher's Linear Classification Functions 329
12.4 Classification Performance of Discriminant Analysis 333
12.5 Prior Probabilities 334
12.6 Unequal Misclassification Costs 334
12.7 Classifying More Than Two Classes 336
    Example 3: Medical Dispatch to Accident Scenes 336
12.8 Advantages and Weaknesses 339
Problems 341

CHAPTER 13 Generating, Comparing, and Combining Multiple Models 345
13.1 Ensembles 346
    Why Ensembles Can Improve Predictive Power 346
    Simple Averaging or Voting 348
    Bagging 349
    Boosting 349
    Bagging and Boosting in R 349
    Stacking 350
    Advantages and Weaknesses of Ensembles 351
13.2 Automated Machine Learning (AutoML) 352
    AutoML: Explore and Clean Data 352
    AutoML: Determine Machine Learning Task 353
    AutoML: Choose Features and Machine Learning Methods 354
    AutoML: Evaluate Model Performance 354
    AutoML: Model Deployment 356
    Advantages and Weaknesses of Automated Machine Learning 357
13.3 Explaining Model Predictions 358
13.4 Summary 360
Problems 362

PART V INTERVENTION AND USER FEEDBACK

CHAPTER 14 Interventions: Experiments, Uplift Models, and Reinforcement Learning 367
14.1 A/B Testing 368
    Example: Testing a New Feature in a Photo Sharing App 369
    The Statistical Test for Comparing Two Groups (T-Test) 370
    Multiple Treatment Groups: A/B/n Tests 372
    Multiple A/B Tests and the Danger of Multiple Testing 372
14.2 Uplift (Persuasion) Modeling 373
    Gathering the Data 374
    A Simple Model 376
    Modeling Individual Uplift 376
    Computing Uplift with R 378
    Using the Results of an Uplift Model 378
14.3 Reinforcement Learning 380
    Explore-Exploit: Multi-armed Bandits 380
    Example of Using a Contextual Multi-Arm Bandit for Movie Recommendations 382
    Markov Decision Process (MDP) 383
14.4 Summary 388
Problems 390

PART VI MINING RELATIONSHIPS AMONG RECORDS

CHAPTER 15 Association Rules and Collaborative Filtering 393
15.1 Association Rules 394
    Discovering Association Rules in Transaction Databases 394
    Example 1: Synthetic Data on Purchases of Phone Faceplates 394
    Generating Candidate Rules 395
    The Apriori Algorithm 397
    Selecting Strong Rules 397
    Data Format 399
    The Process of Rule Selection 400
    Interpreting the Results 401
    Rules and Chance 403
    Example 2: Rules for Similar Book Purchases 405
15.2 Collaborative Filtering 407
    Data Type and Format 407
    Example 3: Netflix Prize Contest 408
    User-Based Collaborative Filtering: "People Like You" 409
    Item-Based Collaborative Filtering 411
    Evaluating Performance 412
    Example 4: Predicting Movie Ratings with MovieLens Data 413
    Advantages and Weaknesses of Collaborative Filtering 416
    Collaborative Filtering vs Association Rules 417
15.3 Summary 419
Problems 421

CHAPTER 16 Cluster Analysis 425
16.1 Introduction 426
    Example: Public Utilities 427
16.2 Measuring Distance Between Two Records 429
    Euclidean Distance 429
    Normalizing Numerical Variables 430
    Other Distance Measures for Numerical Data 432
    Distance Measures for Categorical Data 433
    Distance Measures for Mixed Data 434
16.3 Measuring Distance Between Two Clusters 434
    Minimum Distance 434
    Maximum Distance 435
    Average Distance 435
    Centroid Distance 435
16.4 Hierarchical (Agglomerative) Clustering 437
    Single Linkage 437
    Complete Linkage 438
    Average Linkage 438
    Centroid Linkage 438
    Ward's Method 438
    Dendrograms: Displaying Clustering Process and Results 439
    Validating Clusters 441
    Limitations of Hierarchical Clustering 443
16.5 Non-Hierarchical Clustering: The k-Means Algorithm 444
    Choosing the Number of Clusters (k) 445
Problems 450

PART VII FORECASTING TIME SERIES

CHAPTER 17 Handling Time Series 455
17.1 Introduction 455
17.2 Descriptive vs Predictive Modeling 457
17.3 Popular Forecasting Methods in Business 457
    Combining Methods 457
17.4 Time Series Components 458
    Example: Ridership on Amtrak Trains 458
17.5 Data Partitioning and Performance Evaluation 463
    Benchmark Performance: Naive Forecasts 463
    Generating Future Forecasts 465
Problems 466

CHAPTER 18 Regression-Based Forecasting 469
18.1 A Model with Trend 469
    Linear Trend 469
    Exponential Trend 473
    Polynomial Trend 474
18.2 A Model with Seasonality 476
18.3 A Model with Trend and Seasonality 478
18.4 Autocorrelation and ARIMA Models 479
    Computing Autocorrelation 480
    Improving Forecasts by Integrating Autocorrelation Information 483
    Evaluating Predictability 486
Problems 489

CHAPTER 19 Smoothing and Deep Learning Methods for Forecasting 499
19.1 Smoothing Methods: Introduction 500
19.2 Moving Average 500
    Centered Moving Average for Visualization 500
    Trailing Moving Average for Forecasting 501
    Choosing Window Width (w) 504
19.3 Simple Exponential Smoothing 505
    Choosing Smoothing Parameter alpha 506
    Relation Between Moving Average and Simple Exponential Smoothing 506
19.4 Advanced Exponential Smoothing 507
    Series with a Trend 508
    Series with a Trend and Seasonality 508
    Series with Seasonality (No Trend) 509
19.5 Deep Learning for Forecasting 511
Problems 516

PART VIII DATA ANALYTICS

CHAPTER 20 Social Network Analytics 527
20.1 Introduction 527
20.2 Directed vs Undirected Networks 529
20.3 Visualizing and Analyzing Networks 530
    Plot Layout 530
    Edge List 533
    Adjacency Matrix 533
    Using Network Data in Classification and Prediction 534
20.4 Social Data Metrics and Taxonomy 534
    Node-Level Centrality Metrics 535
    Egocentric Network 536
    Network Metrics 536
20.5 Using Network Metrics in Prediction and Classification 538
    Link Prediction 538
    Entity Resolution 540
    Collaborative Filtering 542
20.6 Collecting Social Network Data with R 545
20.7 Advantages and Disadvantages 545
Problems 548

CHAPTER 21 Text Mining 549
21.1 Introduction 549
21.2 The Tabular Representation of Text 550
21.3 Bag-of-Words vs Meaning Extraction at Document Level 551
21.4 Preprocessing the Text 552
    Tokenization 553
    Text Reduction 555
    Presence/Absence vs Frequency 556
    Term Frequency-Inverse Document Frequency (TF-IDF) 557
    From Terms to Concepts: Latent Semantic Indexing 558
    Extracting Meaning 559
    From Terms to High-Dimensional Word Vectors: Word2Vec or GloVe 559
21.5 Implementing Machine Learning Methods 560
21.6 Example: Online Discussions on Autos and Electronics 560
    Importing and Labeling the Records 561
    Text Preprocessing in R 561
    Producing a Concept Matrix 561
    Fitting a Predictive Model 562
    Prediction 564
21.7 Example: Sentiment Analysis of Movie Reviews 564
    Data Loading, Preparation, and Partitioning 565
    Generating and Applying the GloVe Model 565
    Fitting a Predictive Model 566
21.8 Summary 568
Problems 570

CHAPTER 22 Responsible Data Science 573
22.1 Introduction 573
22.2 Unintentional Harm 574
22.3 Legal Considerations 576
22.4 Principles of Responsible Data Science 577
    Non-maleficence 578
    Fairness 578
    Transparency 579
    Accountability 580
    Data Privacy and Security 580
22.5 A Responsible Data Science Framework 580
    Justification 581
    Assembly 581
    Data Preparation 582
    Modeling 583
    Auditing 583
22.6 Documentation Tools 584
    Impact Statements 584
    Model Cards 585
    Datasheets 586
    Audit Reports 586
22.7 Example: Applying the RDS Framework to the COMPAS Example 588
    Unanticipated Uses 588
    Ethical Concerns 588
    Protected Groups 588
    Data Issues 589
    Fitting the Model 589
    Auditing the Model 591
    Bias Mitigation 596
22.8 Summary 598
Problems 599

PART IX CASES

CHAPTER 23 Cases 603
23.1 Charles Book Club 603
    The Book Industry 603
    Database Marketing at Charles 604
    Machine Learning Techniques 606
    Assignment 608
23.2 German Credit 610
    Background 610
    Data 610
    Assignment 614
23.3 Tayko Software Cataloger 615
    Background 615
    The Mailing Experiment 615
    Data 615
    Assignment 617
23.4 Political Persuasion 619
    Background 619
    Predictive Analytics Arrives in US Politics 619
    Political Targeting 619
    Uplift 620
    Data 621
    Assignment 621
23.5 Taxi Cancellations 623
    Business Situation 623
    Assignment 623
23.6 Segmenting Consumers of Bath Soap 625
    Business Situation 625
    Key Problems 625
    Data 626
    Measuring Brand Loyalty 626
    Assignment 626
23.7 Direct-Mail Fundraising 629
    Background 629
    Data 629
    Assignment 629
23.8 Catalog Cross-Selling 632
    Background 632
    Assignment 632
23.9 Time Series Case: Forecasting Public Transportation Demand 634
    Background 634
    Problem Description 634
    Available Data 634
    Assignment Goal 634
    Assignment 635
    Tips and Suggested Steps 635
23.10 Loan Approval 636
    Background 636
    Regulatory Requirements 636
    Getting Started 636
    Assignment 637

References 639
R Packages Used in the Book 643
Data Files Used in the Book 647
Index 649

About the Authors

Galit Shmueli [Author]

No introduction is available for this author.

Peter C. Bruce [Author]

No introduction is available for this author.

Peter Gedeck [Author]

No introduction is available for this author.

Nitin R. Patel [Author]

Co-founder of Cytel Inc., headquartered in Cambridge, Massachusetts, where he currently serves as a director. A Fellow of the American Statistical Association, he has held visiting professorships at MIT and Harvard University.
