Recurrent Neural Networks (hardback)
Format: Hardback
Language: English
Number of pages: 416
Publication date: 1999-12-01
Publisher: CRC Press Inc
Contributor: Jain, L.C.
Illustrations: 11 black & white tables
Dimensions: 235 x 156 x 19 mm
Number of components: 1
Components: Contains 13 hardbacks
ISBN: 9780849371813

Recurrent Neural Networks

Design and Applications

Hardback, English, 1999-12-01
With existing applications ranging from motion detection to music synthesis to financial forecasting, recurrent neural networks have attracted widespread attention. That interest drives Recurrent Neural Networks: Design and Applications, a summary of the design, applications, current research, and challenges of this subfield of artificial neural networks. The overview covers every aspect of recurrent neural networks: it outlines the wide variety of learning techniques and the research projects associated with them, and individual chapters address architectures ranging from fully connected to partially connected networks, including recurrent multilayer feedforward networks. The book presents problems involving trajectories, control systems, and robotics, as well as the use of recurrent networks in chaotic systems, and the authors share their expert knowledge of alternative designs and advances in the theory of these networks.

The dynamical behavior of recurrent neural networks is useful for solving problems in science, engineering, and business, and this approach promises substantial advances in the coming years. Recurrent Neural Networks illuminates these opportunities and provides a broad view of current work in this rich field.
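For readers new to the topic, a minimal sketch of the recurrence the book is built around may help. This is a generic NumPy illustration, not code from the book; the layer sizes, weight names, and the rnn_step helper are assumptions chosen for the example. It shows how a fully connected recurrent layer feeds its previous hidden state back into the current update, which is the "dynamical behavior" the description refers to.

```python
import numpy as np

# Generic illustration (not from the book): one fully connected recurrent layer.
# The hidden state h_t depends on the current input x_t and on h_{t-1}.
rng = np.random.default_rng(0)

input_size, hidden_size = 3, 5
W_in = 0.1 * rng.standard_normal((hidden_size, input_size))    # input weights (assumed names)
W_rec = 0.1 * rng.standard_normal((hidden_size, hidden_size))  # recurrent (feedback) weights
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One step of a vanilla recurrent layer: h_t = tanh(W_in x_t + W_rec h_{t-1} + b)."""
    return np.tanh(W_in @ x_t + W_rec @ h_prev + b)

# Unroll the recurrence over a short input sequence.
h = np.zeros(hidden_size)
for x_t in rng.standard_normal((4, input_size)):
    h = rnn_step(x_t, h)
print(h)  # final hidden state summarizes the whole sequence
```

The training methods surveyed in the table of contents below (second-order algorithms, gradient-free methods, dynamic backpropagation) are, broadly, different ways of fitting weights such as W_in and W_rec in a recurrence of this kind.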

Table of contents

INTRODUCTION
Overview; Design Issues and Theory; Applications; Future Directions

RECURRENT NEURAL NETWORKS FOR OPTIMIZATION: THE STATE OF THE ART
Introduction; Continuous-Time Neural Networks for QP and LCP; Discrete-Time Neural Networks for QP and LCP; Simulation Results; Concluding Remarks

EFFICIENT SECOND-ORDER LEARNING ALGORITHMS FOR DISCRETE-TIME RECURRENT NEURAL NETWORKS
Introduction; Spatio x Spatio-Temporal Processing; Computational Capability; Recurrent Neural Networks as Nonlinear Dynamic Systems; Recurrent Neural Networks and Second-Order Learning Algorithms; Recurrent Neural Network Architectures; State Space Representation for Recurrent Neural Networks; Second Order Information in Optimization-Based Learning Algorithms; The Conjugate Gradient Algorithm; An Improved SGM Method; The Learning Algorithm for Recurrent Neural Networks; Simulation Results; Concluding Remarks

DESIGNING HIGH ORDER RECURRENT NETWORKS FOR BAYESIAN BELIEF REVISION
Introduction; Belief Revision and Reasoning Under Uncertainty; Hopfield Networks and Mean Field Annealing; High Order Recurrent Networks; Efficient Data Structures for Implementing HORNs; Designing HORNs for Belief Revision; Conclusions

EQUIVALENCE IN KNOWLEDGE REPRESENTATION: AUTOMATA, RECURRENT NEURAL NETWORKS, AND DYNAMICAL FUZZY SYSTEMS
Introduction; Fuzzy Finite State Automata; Representation of Fuzzy States; Automata Transformation; Network Architecture; Network Stability Analysis; Simulations; Conclusions

LEARNING LONG-TERM DEPENDENCIES IN NARX RECURRENT NEURAL NETWORKS
Introduction; Vanishing Gradients and Long-Term Dependencies; NARX Networks; An Intuitive Explanation of NARX Network Behavior; Experimental Results; Conclusion

OSCILLATION RESPONSES IN A CHAOTIC RECURRENT NETWORK
Introduction; Progression to Chaos; External Patterns; Dynamic Adjustment of Pattern Strength; Characteristics of the Pattern-to-Oscillation Map; Discussion

LESSONS FROM LANGUAGE LEARNING
Introduction; Lesson 1: Language Learning Is Hard; Lesson 2: When Possible, Search a Smaller Space; Lesson 3: Search the Most Likely Places First; Lesson 4: Order Your Training Data; Summary

RECURRENT AUTOASSOCIATIVE NETWORKS: DEVELOPING DISTRIBUTED REPRESENTATIONS OF STRUCTURED SEQUENCES BY AUTOASSOCIATION
Introduction; Sequences, Hierarchy, and Representations; Neural Networks and Sequential Processing; Recurrent Autoassociative Networks; A Cascade of RANs; Going Further to a Cognitive Model; Discussion; Conclusions

COMPARISON OF RECURRENT NEURAL NETWORKS FOR TRAJECTORY GENERATION
Introduction; Architecture; Training Set; Error Function and Performance Metric; Training Algorithms; Simulations; Conclusions

TRAINING ALGORITHMS FOR RECURRENT NEURAL NETS THAT ELIMINATE THE NEED FOR COMPUTATION OF ERROR GRADIENTS WITH APPLICATION TO TRAJECTORY PRODUCTION PROBLEM
Introduction; Description of the Learning Problem and Some Issues in Spatiotemporal Training; Training by Methods of Learning Automata; Training by Simplex Optimization Method; Conclusions

TRAINING RECURRENT NEURAL NETWORKS FOR FILTERING AND CONTROL
Introduction; Preliminaries; Principles of Dynamic Learning; Dynamic Backprop for the LDRN; Neurocontrol Application; Recurrent Filter; Summary

REMEMBERING HOW TO BEHAVE: RECURRENT NEURAL NETWORKS FOR ADAPTIVE ROBOT BEHAVIOR
Introduction; Background; Recurrent Neural Networks for Adaptive Robot Behavior; Summary and Discussion