Handbook of Granular Computing by Witold Pedrycz, Andrzej Skowron, and Vladik Kreinovich (Wiley) Although the term is relatively recent, the notions and principles of Granular Computing (GrC) have appeared in different guises in many related fields, including granularity in Artificial Intelligence, interval computing, cluster analysis, quotient space theory, and many others. Recent years have witnessed a renewed and expanding interest in the topic as it begins to play a key role in bioinformatics, e-commerce, machine learning, security, data mining, and wireless mobile computing when it comes to issues of effectiveness, robustness, and uncertainty.
Handbook of Granular Computing offers a comprehensive reference source for the granular computing community, edited by and with contributions from leading experts in the field.
Handbook of Granular Computing represents a significant and valuable contribution to the literature and will appeal to a broad audience, including researchers, students, and practitioners in the fields of Computational Intelligence, pattern recognition, fuzzy sets and neural networks, system modelling, operations research, and bioinformatics.
In Dissertatio de Arte Combinatoria by Gottfried Wilhelm Leibniz (1666), one can find the following sentences: 'If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, and say to each other: "Let us calculate" ', and in New Essays on Human Understanding (1705), 'Languages are the best mirror of the human mind, and that a precise analysis of the signification of words would tell us more than anything else about the operations of the understanding.' Much later, methods based on fuzzy sets, rough sets, and other soft computing paradigms allowed us to understand that for the calculi of thoughts discussed by Leibniz, it is necessary to develop tools for approximate reasoning about vague, non-crisp concepts. For example, humans express higher-level perceptions using vague, non-Boolean concepts. Hence, for developing truly intelligent methods, tools for approximate reasoning about such concepts, expressed in two-valued languages accessible to intelligent systems, should be developed. One can gain in searching for solutions to tasks related to perceptions by using granular computing (GC). Such searching becomes feasible in GC because GC-based methods exploit the fact that the solutions need only satisfy non-Boolean specifications to a satisfactory degree. Solutions in GC can often be constructed more efficiently than with methods searching for detailed, purely numeric solutions. Relevant granulation leads to efficient solutions that are represented by granules matching specifications to satisfactory degrees.
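To make concrete the idea of a solution matching a non-Boolean specification 'to a satisfactory degree', here is a minimal Python sketch; the specification, numbers, and acceptance threshold are invented for illustration and are not taken from the handbook.

```python
# A minimal sketch (all names and numbers are illustrative): a vague specification
# such as "short response time" is modeled as a membership function, and a candidate
# solution is accepted once it matches the specification to a satisfactory degree,
# rather than exactly.

def short_response_time(ms: float) -> float:
    """Degree (0..1) to which a response time counts as 'short'."""
    if ms <= 100:
        return 1.0
    if ms >= 500:
        return 0.0
    return (500 - ms) / 400  # linear fall-off between 100 ms and 500 ms

def first_satisfactory(candidates, spec, threshold=0.7):
    """Return the first candidate whose degree of matching the spec is high enough."""
    for c in candidates:
        if spec(c) >= threshold:
            return c
    return None

print(first_satisfactory([450, 320, 180, 90], short_response_time))  # -> 180
```

The point is only that the search stops at the first candidate that is 'good enough' under the vague specification, rather than continuing toward an exact numeric optimum.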
In an inductive approach to knowledge discovery, information granules provide a means of encapsulating perceptions about objects of interest.
No matter what problem is taken into consideration, we usually cast it into frameworks that facilitate observations about clusters of objects with common features and lead to problem formulation and problem solving with considerable acuity. Such frameworks lend themselves to problems of feature selection and feature extraction, pattern recognition, and knowledge discovery. Identification of relevant features of objects contained in information granules makes it possible to formulate hypotheses about the significance of the objects, construct new granules containing sample objects during interactions with the environment, use GC to measure the nearness of complex granules, and identify infomorphisms between systems of information granules.
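One simple reading of these operations can be sketched as follows; the feature vectors, distance measure, and grouping radius are hypothetical and serve only to illustrate drawing objects together into granules and measuring the nearness of the resulting granules.

```python
# A minimal sketch (hypothetical data and thresholds): objects described by feature
# vectors are drawn together into granules when they are close in feature space, and
# the nearness of two granules is measured by the distance between their prototypes.

import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def granulate(objects, radius=1.0):
    """Group feature vectors into granules: each granule collects objects that lie
    within 'radius' of the granule's first member."""
    granules = []
    for obj in objects:
        for g in granules:
            if distance(obj, g[0]) <= radius:
                g.append(obj)
                break
        else:
            granules.append([obj])
    return granules

def prototype(granule):
    dim = len(granule[0])
    return tuple(sum(v[i] for v in granule) / len(granule) for i in range(dim))

def nearness(g1, g2):
    return distance(prototype(g1), prototype(g2))

objects = [(0.1, 0.2), (0.3, 0.1), (5.0, 5.1), (5.2, 4.9)]
granules = granulate(objects)
print(len(granules), nearness(granules[0], granules[1]))  # 2 granules, far apart
```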
Consider, for instance, image processing. In spite of the continuous progress in the area, a human being assumes a dominant and very much uncontested position when it comes to understanding and interpreting images.
Surely, we do not focus our attention on individual pixels but rather transform them using techniques such as non-linear diffusion and group them together in pixel windows (complex objects) relative to selected features. The parts of an image are then drawn together in information granules containing objects (clusters of pixels) with vectors of values of functions representing object features that constitute information granule descriptions. This signals a remarkable trait of humans, who have the ability to construct information granules, compare them, recognize patterns, transform and learn from them, arrive at explanations about perceived patterns, formulate assertions, and construct approximations of granules of objects of interest.
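A toy sketch of such pixel-window granulation is given below; the 'image' and the window size are invented, and a real system would of course use much richer features than the mean and range of intensities.

```python
# A minimal sketch: pixels are not processed individually but grouped into
# non-overlapping windows, and each window (an information granule) is described
# by a small feature vector such as its mean intensity and intensity range.

def window_granules(image, win=2):
    """Split a 2-D list of pixel intensities into win x win windows and describe
    each window by (row, col, mean intensity, intensity range)."""
    granules = []
    rows, cols = len(image), len(image[0])
    for r in range(0, rows, win):
        for c in range(0, cols, win):
            block = [image[i][j]
                     for i in range(r, min(r + win, rows))
                     for j in range(c, min(c + win, cols))]
            granules.append((r, c, sum(block) / len(block), max(block) - min(block)))
    return granules

image = [[10, 12, 200, 198],
         [11, 13, 202, 199],
         [90, 92, 10, 12],
         [91, 93, 11, 13]]
for g in window_granules(image):
    print(g)  # e.g. (0, 0, 11.5, 3) describes the top-left window
```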
As another example, consider a collection of time series. From our perspective we can describe them in a semiqualitative manner by pointing at specific regions of such signals. Specialists can effortlessly interpret ECG signals. They distinguish some segments of such signals and interpret their combinations.
Experts can seamlessly interpret temporal readings of sensors and assess the status of the monitored system. Again, in all these situations, the individual samples of the signals are not the focal point of the analysis and the ensuing signal interpretation. We always granulate all phenomena (no matter whether they are originally discrete or analog in nature). Time is another important variable that is subjected to granulation. We use milliseconds, seconds, minutes, days, months, and years. Depending on the specific problem we have in mind and who the user is, the size of the information granules (time intervals) can vary quite dramatically. To high-level management, time intervals of quarters of a year or a few years can be meaningful temporal information granules on the basis of which one develops a predictive model. For those in charge of the everyday operation of a dispatching plant, minutes and hours could form a viable scale of time granulation. For the designer of high-speed integrated circuits and digital systems, the temporal information granules concern nanoseconds, microseconds, and, perhaps, milliseconds. Even such commonly encountered and simple examples are convincing enough to lead us to ascertain that (a) information granules are the key components of knowledge representation and processing, (b) the level of granularity of information granules (their size, to be more descriptive) becomes crucial to problem description and the overall strategy of problem solving, and (c) there is no universal level of granularity of information; the size of granules is problem oriented and user dependent.
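As a small illustration of time granulation, the sketch below (with an invented stream of sensor readings) summarizes the same samples over minute-sized and quarter-hour-sized intervals; choosing the interval length is precisely the problem- and user-dependent decision described above.

```python
# A minimal sketch (invented readings, purely for illustration): the same stream of
# time-stamped samples can be granulated into time intervals of different sizes.

from collections import defaultdict

def granulate_series(samples, interval_seconds):
    """Group (timestamp_in_seconds, value) samples into intervals and summarize
    each interval by its mean value."""
    buckets = defaultdict(list)
    for t, v in samples:
        buckets[t // interval_seconds].append(v)
    return {k * interval_seconds: sum(vs) / len(vs) for k, vs in sorted(buckets.items())}

samples = [(t, 20 + (t % 180) / 60) for t in range(0, 3600, 30)]  # one hour of readings
print(granulate_series(samples, 60))    # minute-level granules
print(granulate_series(samples, 900))   # quarter-hour granules
```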
What has been said so far touches on a qualitative aspect of the problem. The challenge is to develop a computing framework within which all these representation and processing endeavors can be formally realized. The common platform emerging within this context comes under the name of granular computing. In essence, it is an emerging paradigm of information processing that has its roots in Leibniz's ideas [1], in Cantor's set theory, Zadeh's fuzzy information granulation [8], and Pawlak's discovery of elementary sets [9] (see also [10-14]).
While we have already noticed a number of important conceptual and computational constructs built in the domains of system modeling, machine learning, image processing, pattern recognition, and data compression, in which various abstractions (and the ensuing information granules) came into existence, GC becomes innovative and intellectually proactive in several fundamental ways.
Handbook of Granular Computing is one of the first, if not the first, comprehensive compendium on GC. There are several fundamental goals of this project. First, by capitalizing on several fundamental and well-established frameworks of fuzzy sets, interval analysis, and rough sets, we build unified foundations of computing with information granules. Second, we offer the reader a systematic and coherent exposure of the concepts, design methodologies, and detailed algorithms. In general, we decided to adhere to the top-down strategy of the exposure of the material by starting with the ideas along with some motivating notes and afterward proceeding with the detailed design that materializes in specific algorithms, applications, and case studies.
The editors have made the handbook self-contained to a significant extent. While an overall knowledge of GC and its subdisciplines would be helpful, the reader is provided with all necessary prerequisites. Where suitable, we have augmented some parts of the material with a step-by-step explanation of more advanced concepts supported by a significant amount of illustrative numeric material.
They are strong proponents of a down-to-earth presentation of the material. While they maintain a certain required level of formalism and mathematical rigor, the ultimate goal is to present the material so that it also emphasizes its applied side (meaning that the reader becomes fully aware of the direct implications of the presented algorithms, modeling, and the like).
This Handbook of Granular Computing is aimed at a broad audience of researchers and practitioners. Owing to the nature of the material being covered and the way it is organized, we hope that it will appeal to well-established communities including those active in computational intelligence (CI), pattern recognition, machine learning, fuzzy sets, neural networks, system modeling, and operations research. The research topic can be treated in two different ways. First, as one of the emerging and attractive areas of CI and GC, it attracts researchers engaged in some more specialized domains. Second, viewed as an enabling technology whose contribution goes far beyond the communities and research areas listed above, we envision a genuine interest from a vast array of research disciplines (engineering, economics, bioinformatics, etc.).
The editors also hope that the handbook will serve as highly useful reference material for graduate students and senior undergraduate students in a variety of courses on CI, artificial intelligence, pattern recognition, data analysis, system modeling, signal processing, operations research, numerical methods, and knowledge-based systems.
In the organization of the material, they followed a top-down approach by splitting the content into three main parts. The first one, fundamentals and methodology, covers the essential background of the leading contributing technologies of GC, such as interval analysis, fuzzy sets, and rough sets. They also offer comprehensive coverage of the underlying concepts along with their interpretation and elaborate on the representative techniques of GC. Special attention is paid to the development of granular constructs, say, fuzzy sets, that serve as generic abstract constructs reflecting our perception of the world and a means of effective problem solving. A number of highly representative algorithms (say, cognitive maps) are presented. Next, in Part II, they move on to the hybrid constructs of GC, where a variety of symbiotic developments of information granules, such as interval-valued fuzzy sets, type-2 fuzzy sets, and shadowed sets, are considered. In the last part, they concentrate on a diversity of applications and case studies.
Granular Computing: An Introduction (The Springer International Series in Engineering and Computer Science), co-authored by Professors A. Bargiela and W. Pedrycz (Kluwer Academic Publishers, Dordrecht, 2003), was the first book on granular computing. It was a superlative work in all respects, and Handbook of Granular Computing is a worthy successor. Significantly, the co-editors of the handbook, Professors Pedrycz, Skowron, and Kreinovich, are, respectively, the leading contributors to the closely interrelated fields of granular computing, rough set theory, and interval analysis, an interrelationship which is accorded considerable attention in the handbook. The articles in the handbook are divided into three groups: foundations of granular computing (interval analysis, fuzzy set theory, and rough set theory); hybrid methods and models of granular computing; and applications and case studies. One cannot but be greatly impressed by the vast panorama of applications, extending from medical informatics and data mining to time-series forecasting and the Internet. Throughout the handbook, the exposition is aimed at reader friendliness and deserves high marks in all respects.
What is granular computing? The preface and the chapters of this handbook provide a comprehensive answer to this question. In the following, I take the liberty of sketching my perception of granular computing — a perception in which the concept of a generalized constraint plays a pivotal role. An earlier view may be found in my 1998 paper 'Some reflections on soft computing, granular computing and their roles in the conception, design and utilization of information/intelligent systems'.
Basically, granular computing differs from conventional modes of computation in that the objects of computation are not values of variables but information about values of variables. Furthermore, information is allowed to be imperfect; i.e., it may be imprecise, uncertain, incomplete, conflicting, or partially true. It is this facet of granular computing that endows it with a capability to deal with real-world problems which are beyond the reach of bivalent-logic-based methods that are intolerant of imprecision and partial truth. In particular, through the use of generalized-constraint-based semantics, granular computing has the capability to compute with information described in natural language.
Granular computing is based on fuzzy logic. There are many misconceptions about fuzzy logic. To begin with, fuzzy logic is not fuzzy. Basically, fuzzy logic is a precise logic of imprecision. Fuzzy logic is inspired by two remarkable human capabilities. First, the capability to reason and make decisions in an environment of imprecision, uncertainty, incompleteness of information, and partiality of truth. And second, the capability to perform a wide variety of physical and mental tasks based on perceptions, without any measurements and any computations. The basic concepts of graduation and granulation form the core of fuzzy logic, and are the principal distinguishing features of fuzzy logic. More specifically, in fuzzy logic everything is or is allowed to be graduated, i.e., be a matter of degree or, equivalently, fuzzy. Furthermore, in fuzzy logic everything is or is allowed to be granulated, with a granule being a clump of attribute values drawn together by indistinguishability, similarity, proximity, or functionality. The concept of a generalized constraint serves to treat a granule as an object of computation. Graduated granulation, or equivalently fuzzy granulation, is a unique feature of fuzzy logic. Graduated granulation is inspired by the way in which humans deal with complexity and imprecision.
The concepts of graduation, granulation, and graduated granulation play key roles in granular computing. Graduated granulation underlies the concept of a linguistic variable, i.e., a variable whose values are words rather than numbers. In retrospect, this concept, in combination with the associated concept of a fuzzy if-then rule, may be viewed as a first step toward granular computing.
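As a minimal sketch of these two concepts, the Python fragment below models a linguistic variable whose values are words and a fuzzy if-then rule that fires to a degree; the membership functions, terms, and rule are invented for illustration and are not drawn from Zadeh's papers.

```python
# A minimal sketch (invented membership functions and rule): a linguistic variable
# "temperature" takes word values such as "cold" and "hot", each modeled by a
# membership function, and a fuzzy if-then rule fires to the degree its antecedent
# is satisfied.

def trapezoid(a, b, c, d):
    """Membership function that rises on [a, b], is 1 on [b, c], falls on [c, d]."""
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)
    return mu

temperature = {           # linguistic variable: its values are words, not numbers
    "cold": trapezoid(-10, -5, 10, 15),
    "warm": trapezoid(10, 15, 22, 27),
    "hot":  trapezoid(22, 27, 40, 45),
}

def rule_fire(reading, term, consequent):
    """IF temperature is <term> THEN <consequent>, fired to a degree in [0, 1]."""
    return consequent, temperature[term](reading)

print(rule_fire(25.0, "hot", "increase cooling"))   # ('increase cooling', 0.6)
```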
Today, the concept of a linguistic variable is used in almost all applications of fuzzy logic. When I introduced this concept in my 1973 paper 'Outline of a new approach to the analysis of complex systems and decision processes', I was greeted with scorn and derision rather than with accolades. The derisive comments reflected a deep-seated tradition in science — the tradition of according much more respect to numbers than to words. Thus, in science, progress is equated to progression from words to numbers. In fuzzy logic, in moving from numerical to linguistic variables, we are moving in a countertraditional direction. What the critics did not understand is that in moving in the countertraditional direction, we are sacrificing precision to achieve important advantages down the line. This is what is called 'the fuzzy logic gambit.' The fuzzy logic gambit is one of the principal rationales for the use of granular computing.
In sum, to say that the Handbook of Granular Computing is an important contribution to the literature is an understatement. It is a work whose importance cannot be exaggerated. The coeditors, the authors, and the publisher, John Wiley, deserve our thanks, congratulations, and loud applause.
Lotfi A. Zadeh Berkeley, California
Computers: Understanding Technology 2nd Edition, with CD-ROM by Floyd Fuller, Brian Larson (EMC/Paradigm Publishing) describes the role of computers in our lives and in society, and covers various aspects of computer hardware (including input, processing, output, and storage), system and application software, telecommunications and networks, databases and information management, applications design and programming, security and ethics, and careers. A companion CD-ROM contains videos illustrating key points, projects and tutorials, self-tests, and a chronology of computer development. Fuller teaches at Appalachian State University; Larson, at California State University, Stanislaus.
For millions of people worldwide, the computer and the Internet have become an integral and essential part of life. In the home, we use computers to communicate quickly with family and friends, manage our finances more effectively, enjoy music and games, shop online for products and services, and much more. In the workplace, computers have become an almost indispensable tool. With them, workers can become more efficient, productive, and creative, and companies can connect almost instantly with suppliers and partners on the other side of the world.
Studying this book will help prepare you for the workplace of today and tomorrow, in which some level of computer skills is often an essential requirement for employment. Employees who continually try to improve their skills have an advantage over those who do not. Some would even argue that understanding technology has become a survival skill. This book will help you become a survivor.
As with the first edition, the goal of this new edition of Computers: Understanding Technology is to introduce you to the key information technology concepts and the vital technical skills that can help improve your personal and professional lives. In planning the changes for the second edition, we conducted focus groups and used their input to create a state-of-the-art computer concepts product that will enhance the teaching and learning experience.
A major change with the second edition is that the three books in the series divide the same chapter content into groups that match the three most common computer concepts course lengths. The Comprehensive book consists of chapters 1-15, the Introductory book consists of chapters 1-9, and the Brief book includes chapters 1-5. Additionally, the order of chapters has changed minimally, and the two e-commerce chapters from the first edition have been combined into one. The new topic order is as follows:
Chapter 1: Our Digital World
Chapter 2: Input and Processing
Chapter 3: Output and Storage
Chapter 4: System Software
Chapter 5: Application Software
Chapter 6: Telecommunications and Networks
Chapter 7: The Internet and the World Wide Web
Chapter 8: Using Databases to Manage Information
Chapter 9: Understanding Information Systems
Chapter 10: Electronic Commerce
Chapter 11: Programming Concepts and Languages
Chapter 12: Multimedia and Artificial Intelligence
Chapter 13: Security Strategies and Systems
Chapter 14: Computer Ethics
Chapter 15: Information Technology Careers
Special Features
To more precisely meet the needs of the varied introductory computer courses across the country, we have developed eight Special Features that are included in various combinations in the three books. These succinct, well illustrated overviews explain the major concepts within the eight topics:
Buying and Installing a PC
Adding Software and Hardware Components to Your PC
Networks and Telecommunications
The Internet and the World Wide Web
Computer Ethics
Building a Web Site
Security Issues and Strategies
Using XML to Share Information
The first five Special Features appear in the Brief book; the Introductory book includes the first two Special Features plus the Computer Ethics, Building a Web Site, and Security Issues and Strategies features. The Comprehensive book includes the two PC features plus the topics of Building a Web Site and Using XML to Share Information.
Additional Application Exercises: Windows and Internet Tutorials
Recognizing the crucial need for students to be able to use Windows efficiently and effectively, we have developed a set of 15 Windows XP Tutorials that teach the core computer management skills. Students can work through the group of tutorials in one sitting, or they can work through them one at a time as the first activity in the end-of-chapter exercises. The Windows Tutorials appear with the Internet Tutorials at the end of the book.
New Concepts Exercises
New to this edition are three exercises that help expand and reinforce student comprehension of the chapter content. "Knowledge Check" is a set of multiple-choice questions; "Tech Architecture" offers a drawing that students label; and "Ethical Dilemmas" poses an ethical issue that students discuss and debate.
Using the Encore! CD-ROM and the Internet Resource Center
Included with the textbook is a multimedia CD-ROM that adds an experiential and interactive dimension to the learning of fundamental computer concepts. For every chapter, the CD offers:
Tech Tutors: Brief, animated Flash segments that bring key topics to life
Quizzes: Multiple-choice tests available in both Practice and Test modes with scores reported to the student and instructor by e-mail
Glossary: Key terms and definitions combined with related illustrations from the text
Image Bank: Illustrations of concepts and processes accompanied by the related terms and definitions
Additionally, the Encore! CD includes a comprehensive set of computer literacy tutorials called Tech Review, which are accessible at any time and within any chapter.
The CD may be used as a preview or as a sequel to each chapter, or both. That is, you can play each chapter's Flash animations (or videos) to get an overview of what is taught in the book and then study the text chapter before returning to the CD for its enriching content and interactivity. Or, you can complete a chapter and then complete the corresponding chapter on the CD. Either way, you will benefit from working with this integrated multimedia CD-ROM and will find an approach that suits your learning style.
To further address the dynamic and ever-changing nature of computer technology, additional readings, projects, and activities for each chapter are provided on the Internet Resource Center at www.emcp.com. Look for the title of the book under the list of Resource Centers and prepare for some stimulating reading and activities.
Embedded Systems and Computer Architecture by Graham R. Wilson (Newnes) is designed as an introduction to microprocessors and computer architecture for an electronics undergraduate or HND/C course. It is a core text for modules on microprocessors, embedded systems, and computer architecture, takes a practical, design-oriented approach, and includes a free CD-ROM featuring a unique microprocessor simulator. The book differs from other books available in that it uses a design-oriented approach rather than a purely descriptive style dedicated to a particular commercial microprocessor.
The accompanying suite of programs includes interactive animations of digital circuits and an integrated development system (IDE). The IDE includes a graphical simulator of a microprocessor, down to the micro-operation level. The reader can observe and test their own system designs, with a variety of peripheral devices, on their own PC with the facilities normally found in a microprocessor systems laboratory.
The Compiler Design Handbook: Optimizations & Machine Code Generation by Y. N. Srikant, Priti Shankar (CRC Press) is the first up-to-date handbook for advanced compiler optimizations and code generation, featuring chapters contributed by leading experts and active researchers in the field. The widespread use of object-oriented languages and Internet security concerns are just the beginning. Add embedded systems, multiple memory banks, highly pipelined units operating in parallel, and a host of other advances, and it becomes clear that current and future computer architectures pose immense challenges to compiler designers, challenges that already exceed the capabilities of traditional compilation techniques.
The Compiler Design Handbook is designed to help you meet those challenges. Written by top researchers and designers from around the world, it presents detailed, up-to-date discussions on virtually all aspects of compiler optimizations and code generation. It covers a wide range of advanced topics, focusing on contemporary architectures such as VLIW, superscalar, multiprocessor, and digital signal processing. It also includes detailed presentations that highlight the different techniques required for optimizing programs written in parallel and those written in object-oriented languages. Each chapter is self-contained, treats its topic in depth, and includes a section on future research directions. Compiler design has always been a highly specialized subject with a fine blend of intricate theory and difficult implementation. Yet compilers play an increasingly vital role in the quest for improved performance. With its careful attention to the most researched, difficult, and widely discussed topics in compiler design, The Compiler Design Handbook offers a unique opportunity for designers and researchers to update their knowledge, refine their skills, and prepare for future innovations.
Contents:
Dataflow Analysis, Uday Khedkar, Indian Institute of Technology, Bombay, India;
Automatic Generation of Code Optimizers from Formal Specifications, Vineeth Kumar Paleri, Regional Engineering College, Calicut, India;
Scalar Compiler Optimizations on the SSA Form and the Flowgraph, Y.N. Srikant, Indian Institute of Science, Bangalore, India;
Profile-Guided Compiler Optimizations, Rajiv Gupta, University of Arizona, USA, Eduard Mehofer, Institute for Software Science, Austria, and Youtao Zhang, University of Arizona, USA;
Shape Analysis and Applications, Reinhard Wilhelm, Universitaet des Saarlandes, Germany, Thomas Reps, University of Wisconsin-Madison, USA, and Mooly Sagiv, Tel Aviv University, Israel;
Optimizations for Object-Oriented Languages, Andreas Krall, Inst. fur Computersprachen, Austria, and Nigel Horspool, University of Victoria, BC, Canada;
Data Flow Testing, Rajiv Gupta and Neelam Gupta, University of Arizona, USA;
Program Slicing, G. B. Mund, D. Goswami, and Rajib Mall, Indian Institute of Technology, Kharagpur, India;
Debuggers for Programming Languages, Sanjeev Kumar Aggarwal and M. Sarath Kumar, Indian Institute of Technology, Kanpur, India;
Dependence Analysis and Parallelizing Transformations, Rajopadhye, Colorado State University, USA;
Compilation for Distributed Memory Architectures, Alok Choudhary, Northwestern University, USA, and Mahmut Kandemir, Pennsylvania State University, USA;
Automatic Data Distribution, J. Ramanujam, Louisiana State University, USA;
Register Allocation, K. Gopinath, Indian Institute of Science, Bangalore, India;
Architecture Description Languages for Retargetable Compilation, Sharad Malik and Wei Qin, Princeton University, USA;
Instruction Selection Using Tree Parsing, Priti Shankar, Indian Institute of Science, Bangalore, India;
A Retargetable VLIW Compiler Framework for DSPs, Sharad Malik and S. Rajagopalan, Princeton University, USA;
Instruction Scheduling, R. Govindarajan, Indian Institute of Science, Bangalore, India;
Software Pipelining, Vicki H. Allan, Utah State University, USA;
Dynamic Compilation, Evelyn Duesterwald, Hewlett Packard Laboratories, USA;
Compiling Safe Mobile Code, R. Venugopal, Hewlett-Packard India Software Operation Ltd., India, and Ravindra B. Keskar, Sasken Communication Technologies Ltd., India;
Type Systems in Programming Languages, Ramesh Subrahmanyam, Burning Glass Technologies, USA;
An Introduction to Operational Semantics, Sanjeeva Prasad and S. Arun Kumar, Indian Institute of Technology, Delhi, India