James M. Ortega - Books
Showing all books by the author James M. Ortega. Shop with free shipping and fast delivery.
6 products
1 064 kr
Ships within 10-15 business days
Linear algebra and matrix theory are essentially synonymous terms for an area of mathematics that has become one of the most useful and pervasive tools in a wide range of disciplines. It is also a subject of great mathematical beauty. In consequence of both of these facts, linear algebra has increasingly been brought into lower levels of the curriculum, either in conjunction with the calculus or separate from it but at the same level. A large and still growing number of textbooks has been written to satisfy this need, aimed at students at the junior, sophomore, or even freshman levels. Thus, most students now obtaining a bachelor's degree in the sciences or engineering have had some exposure to linear algebra. But rarely, even when solid courses are taken at the junior or senior levels, do these students have an adequate working knowledge of the subject to be useful in graduate work or in research and development activities in government and industry. In particular, most elementary courses stop at the point of canonical forms, so that while the student may have "seen" the Jordan and other canonical forms, there is usually little appreciation of their usefulness. And there is almost never time in the elementary courses to deal with more specialized topics like nonnegative matrices, inertia theorems, and so on. In consequence, many graduate courses in mathematics, applied mathematics, or applications develop certain parts of matrix theory as needed.
1 625 kr
Ships within 10-15 business days
Although the origins of parallel computing go back to the last century, it was only in the 1970s that parallel and vector computers became available to the scientific community. The first of these machines, the 64-processor Illiac IV and the vector computers built by Texas Instruments, Control Data Corporation, and then Cray Research Corporation, had a somewhat limited impact. They were few in number and available mostly to workers in a few government laboratories. By now, however, the trickle has become a flood. There are over 200 large-scale vector computers now installed, not only in government laboratories but also in universities and in an increasing diversity of industries. Moreover, the National Science Foundation's Supercomputing Centers have made large vector computers widely available to the academic community. In addition, smaller, very cost-effective vector computers are being manufactured by a number of companies. Parallelism in computers has also progressed rapidly. The largest supercomputers now consist of several vector processors working in parallel. Although the number of processors in such machines is still relatively small (up to 8), it is expected that an increasing number of processors will be added in the near future (to a total of 16 or 32). Moreover, there are a myriad of research projects to build machines with hundreds, thousands, or even more processors. Indeed, several companies are now selling parallel machines, some with as many as hundreds, or even tens of thousands, of processors.
572 kr
Ships within 7-10 business days
Describes a selection of important parallel algorithms for matrix computations. Reviews the current status and provides an overall perspective of parallel algorithms for solving problems arising in the major areas of numerical linear algebra, including (1) direct solution of dense, structured, or sparse linear systems, (2) dense or structured least squares computations, (3) dense or structured eigenvalue and singular value computations, and (4) rapid elliptic solvers. The book emphasizes computational primitives whose efficient execution on parallel and vector computers is essential to obtain high-performance algorithms. Consists of two comprehensive survey papers on important parallel algorithms for solving problems arising in the major areas of numerical linear algebra - direct solution of linear systems, least squares computations, eigenvalue and singular value computations, and rapid elliptic solvers - plus an extensive up-to-date bibliography (2,000 items) on related research.
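To illustrate what is meant by a computational primitive with efficient parallel execution, here is a minimal sketch (in Python with NumPy, which is assumed available; the function name and values are illustrative, not taken from the book) of an axpy operation, a classic BLAS-style building block: every output element is independent of the others, which is exactly the structure that vector and parallel hardware exploits.

```python
import numpy as np

def axpy(alpha, x, y):
    """Compute alpha * x + y elementwise.

    Each output element depends only on the corresponding elements
    of x and y, so the work can be done for all elements at once -
    the property that makes this primitive easy to vectorize.
    """
    return alpha * x + y  # NumPy applies the operation to whole arrays in one step

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(axpy(2.0, x, y))  # prints [ 6.  9. 12.]
```

Higher-level matrix algorithms (factorizations, eigensolvers) are typically built on top of such primitives, so their parallel performance hinges on how well these inner operations map onto the hardware.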
1 625 kr
Ships within 10-15 business days
Scientific Computing and Differential Equations: An Introduction to Numerical Methods
Paperback, English, 1991
561 kr
Ships within 7-10 business days
Scientific Computing
Eine Einführung in das wissenschaftliche Rechnen und Parallele Numerik
Paperback, German, 1996
452 kr
Ships within 10-15 business days