Neural Networks

Publication date: 2006-08  Publisher: Tsinghua University Press  Author: [India] Satish Kumar  Pages: 736
Tags: None

Content Overview

Approaching the subject from both theory and practical application, this book gives a comprehensive and systematic introduction to the basic models, methods, and techniques of neural networks, covering neuroscience, statistical pattern recognition, support vector machines, fuzzy systems, soft computing, and dynamical systems. It examines the fundamental neural network models in depth and provides a broad survey of the latest development trends and main research directions in the field. Each chapter contains numerous worked examples and exercises, and for every model the book gives practical application examples together with detailed MATLAB code, making it an excellent neural networks textbook. It is suitable as a textbook for graduate students or senior undergraduates in related fields, and also serves as a reference for researchers working on neural networks.

About the Author

Author: Satish Kumar (India)

Table of Contents

Foreword
Preface
Acknowledgements

Part I  Traces of History and A Neuroscience Briefer
1. Brain Style Computing: Origins and Issues
   1.1 From the Greeks to the Renaissance
   1.2 The Advent of Modern Neuroscience
   1.3 On the Road to Artificial Intelligence
   1.4 Classical AI and Neural Networks
   1.5 Hybrid Intelligent Systems
   Chapter Summary
   Bibliographic Remarks
2. Lessons from Neuroscience
   2.1 The Human Brain
   2.2 Biological Neurons
   Chapter Summary
   Bibliographic Remarks

Part II  Feedforward Neural Networks and Supervised Learning
3. Artificial Neurons, Neural Networks and Architectures
   3.1 Neuron Abstraction
   3.2 Neuron Signal Functions
   3.3 Mathematical Preliminaries
   3.4 Neural Networks Defined
   3.5 Architectures: Feedforward and Feedback
   3.6 Salient Properties and Application Domains of Neural Networks
   Chapter Summary
   Bibliographic Remarks
   Review Questions
4. Geometry of Binary Threshold Neurons and Their Networks
   4.1 Pattern Recognition and Data Classification
   4.2 Convex Sets, Convex Hulls and Linear Separability
   4.3 Space of Boolean Functions
   4.4 Binary Neurons are Pattern Dichotomizers
   4.5 Non-linearly Separable Problems
   4.6 Capacity of a Simple Threshold Logic Neuron
   4.7 Revisiting the XOR Problem
   4.8 Multilayer Networks
   4.9 How Many Hidden Nodes are Enough?
   Chapter Summary
   Bibliographic Remarks
   Review Questions
5. Supervised Learning I: Perceptrons and LMS
   5.1 Learning and Memory
   5.2 From Synapses to Behaviour: The Case of Aplysia
   5.3 Learning Algorithms
   5.4 Error Correction and Gradient Descent Rules
   5.5 The Learning Objective for TLNs
   5.6 Pattern Space and Weight Space
   5.7 Perceptron Learning Algorithm
   5.8 Perceptron Convergence Theorem
   5.9 A Handworked Example and MATLAB Simulation
   5.10 Perceptron Learning and Non-separable Sets
   5.11 Handling Linearly Non-separable Sets
   5.12 α-Least Mean Square Learning
   5.13 MSE Error Surface and its Geometry
   5.14 Steepest Descent Search with Exact Gradient Information
   5.15 μ-LMS: Approximate Gradient Descent
   5.16 Application of LMS to Noise Cancellation
   Chapter Summary
   Bibliographic Remarks
   Review Questions
6. Supervised Learning II: Backpropagation and Beyond
   6.1 Multilayered Network Architectures
   6.2 Backpropagation Learning Algorithm
   6.3 Handworked Example
   6.4 MATLAB Simulation Examples
   6.5 Practical Considerations in Implementing the BP Algorithm
   6.6 Structure Growing Algorithms
   6.7 Fast Relatives of Backpropagation
   6.8 Universal Function Approximation and Neural Networks
   6.9 Applications of Feedforward Neural Networks
   6.10 Reinforcement Learning: A Brief Review
   Chapter Summary
   Bibliographic Remarks
   Review Questions
7. Neural Networks: A Statistical Pattern Recognition Perspective
   7.1 Introduction
   7.2 Bayes' Theorem
   7.3 Two Instructive MATLAB Simulations
   7.4 Implementing Classification Decisions with Bayes' Theorem
   7.5 Probabilistic Interpretation of a Neuron Discriminant Function
   7.6 MATLAB Simulation: Plotting Bayesian Decision Boundaries
   7.7 Interpreting Neuron Signals as Probabilities
   7.8 Multilayered Networks, Error Functions and Posterior Probabilities
   7.9 Error Functions for Classification Problems
   Chapter Summary
   Bibliographic Remarks
   Review Questions
8. Focussing on Generalization: Support Vector Machines and Radial Basis Function Networks
   8.1 Learning From Examples and Generalization
   8.2 Statistical Learning Theory Briefer
   8.3 Support Vector Machines
   8.4 Radial Basis Function Networks
   8.5 Regularization Theory Route to RBFNs
   8.6 Generalized Radial Basis Function Network
   8.7 Learning in RBFNs
   8.8 Image Classification Application
   8.9 Other Models For Valid Generalization
   Chapter Summary
   Bibliographic Remarks
   Review Questions

Part III  Recurrent Neurodynamical Systems
Part IV  Contemporary Topics

Appendix A: Neural Network Hardware
Appendix B: Web Pointers
Bibliography
Index







User Reviews (5 in total)

 
 

  •   Great!!!!
  •   Only the delivery was a bit slow.
  •   I think it is better to read the Chinese edition first and move on to the English one after building some foundation.
  •   A classic title; I have not read it yet, but it should be good.
  •      Ants look utterly confident as they march along, as if they had a plan of action. How else could they organize the "highways" of ant society, build such intricate nests, and wage large-scale wars?
       In fact, this impression is completely wrong. Ants are not clever engineers, architects, or soldiers, at least not as individuals; most ants have little idea of what to do next. Deborah M. Gordon, a biologist at Stanford University, points out that if you watch a single ant try to accomplish anything, you will see how helpless it is: "Ants aren't smart; ant colonies are."
       An ant colony can solve problems that no individual ant can, such as finding the shortest path to the richest food source, assigning different tasks to worker ants, or defending its territory against invaders. As individuals, ants are tiny and fragile, but as a collective they respond to their environment quickly and effectively; their "weapon" is swarm intelligence.
       Where does the swarm intelligence of ants and bees come from? How do simple individual behaviours give rise to complex collective behaviour? If individuals do not coordinate with one another, how can hundreds or thousands of bees reach an important decision? What makes a school of herring change direction in an instant? No single individual is in control, and the collective abilities of these creatures seem almost inconceivable; biologists have long been puzzled by them. Over the past few decades, however, researchers have made some intriguing discoveries.
 

