Statistical Language Learning (paperback)
Format
Paperback
Language
English
Number of pages
190
Publication date
1996-09-01
Edition
New ed
Publisher
MIT Press
Illustrator/Photographer
Glossary, 80 illustrations, bibliography, index
Illustrations
80
Dimensions
240 x 155 x 15 mm
Weight
280 g
Number of components
1
ISBN
9780262531412

Statistical Language Learning

Paperback, English, 1996-09-01
297 kr
Ships within 2-5 business days.
Free shipping within Sweden for private individuals.
Eugene Charniak breaks new ground in artificial intelligence research by presenting statistical language processing from an artificial intelligence point of view, in a text for researchers and scientists with a traditional computer science background.

New, exacting empirical methods are needed to break the deadlock in such areas of artificial intelligence as robotics, knowledge representation, machine learning, machine translation, and natural language processing (NLP). It is time, Charniak observes, to switch paradigms. This text introduces statistical language processing techniques: word tagging, parsing with probabilistic context-free grammars, grammar induction, syntactic disambiguation, semantic word classes, and word-sense disambiguation, along with the underlying mathematics and chapter exercises.

Charniak points out that, as a method of attacking NLP problems, the statistical approach has several advantages. It is grounded in real text and therefore promises to produce usable results, and it offers an obvious way to approach learning: "one simply gathers statistics."

Series: Language, Speech, and Communication
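The remark that one "simply gathers statistics" can be made concrete with a toy sketch (not from the book): maximum-likelihood bigram estimates gathered from a tiny invented corpus.

```python
from collections import Counter

# Toy corpus; a real application would gather counts from a large text collection.
corpus = "the dog ran . the cat ran . the dog sat .".split()

# Gather unigram and bigram counts from the text.
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w1, w2):
    """Maximum-likelihood estimate: P(w2 | w1) = count(w1, w2) / count(w1)."""
    return bigrams[(w1, w2)] / unigrams[w1]

print(bigram_prob("the", "dog"))  # 2 of the 3 occurrences of "the" precede "dog"
```

Such estimates are the raw material for the language models (trigram, HMM, PCFG) the book develops; in practice they are smoothed to handle unseen events.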



Press reviews

"This is a lovely book." -- David Nye


Table of contents

Part 1 The Standard Model: Two Technologies; Morphology and Knowledge of Words; Syntax and Context-Free Grammars; Chart Parsing; Meaning and Semantic Processing; Exercises.
Part 2 Statistical Models and the Entropy of English: A Fragment of Probability Theory; Statistical Models; Speech Recognition; Entropy; Markov Chains; Cross Entropy; Cross Entropy as a Model Evaluator; Exercises.
Part 3 Hidden Markov Models and Two Applications: Trigram Models of English; Hidden Markov Models; Part-of-Speech Tagging; Exercises.
Part 4 Algorithms for Hidden Markov Models: Finding the Most Likely Path; Computing HMM Output Probabilities; HMM Training; Exercises.
Part 5 Probabilistic Context-Free Grammars: Probabilistic Grammars; PCFGs and Syntactic Ambiguity; PCFGs and Grammar Induction; PCFGs and Ungrammaticality; PCFGs and Language Modelling; Basic Algorithms for PCFGs; Exercises.
Part 6 The Mathematics of PCFGs: Relation of HMMs to PCFGs; Finding Sentence Probabilities for PCFGs; Training PCFGs; Exercises.
Part 7 Learning Probabilistic Grammars: Why the Simple Approach Fails; Learning Dependency Grammars; Learning from a Bracketed Corpus; Improving a Partial Grammar; Exercises.
Part 8 Syntactic Disambiguation: Simple Methods for Prepositional Phrases; Using Semantic Information; Relative-Clause Attachment; Uniform Use of Lexical/Semantic Information; Exercises.
Part 9 Word Classes and Meaning: Clustering; Clustering by Next Word; Clustering with Syntactic Information; Problems with Word Clustering; Exercises.
Part 10 Word Senses and Their Disambiguation: Word Senses Using Outside Information; Word Senses Without Outside Information; Meanings and Selectional Restrictions; Discussion; Exercises.
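As a flavor of the HMM algorithms covered in Part 4 ("Finding the Most Likely Path"), here is a minimal Viterbi decoding sketch. The tagset, words, and probabilities are invented for illustration and are not taken from the book.

```python
# A two-state HMM for part-of-speech tagging; all numbers are made up.
states = ["DET", "NOUN"]
start = {"DET": 0.8, "NOUN": 0.2}
trans = {"DET": {"DET": 0.1, "NOUN": 0.9},
         "NOUN": {"DET": 0.4, "NOUN": 0.6}}
emit = {"DET": {"the": 0.9, "dog": 0.0},
        "NOUN": {"the": 0.05, "dog": 0.5}}

def viterbi(words):
    """Return the most likely state (tag) sequence for the observed words."""
    # V[s] = (probability of the best path ending in state s, that path)
    V = {s: (start[s] * emit[s].get(words[0], 0.0), [s]) for s in states}
    for w in words[1:]:
        V = {s: max(((V[p][0] * trans[p][s] * emit[s].get(w, 0.0), V[p][1] + [s])
                     for p in states), key=lambda t: t[0])
             for s in states}
    return max(V.values(), key=lambda t: t[0])[1]

print(viterbi(["the", "dog"]))  # ['DET', 'NOUN']
```

The book derives this dynamic program, along with the forward algorithm for output probabilities and Baum-Welch-style HMM training.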