Intelligent Robotics and Applications
The six-volume set LNAI 11740 to LNAI 11745 constitutes the proceedings of the 12th International Conference on Intelligent Robotics and Applications, ICIRA 2019, held in Shenyang, China, in August 2019. The 378 full and 25 short papers presented in these proceedings were carefully reviewed and selected from 522 submissions. The papers are organized in topical sections as follows: Part I: collective and social robots; human biomechanics and human-centered robotics; robotics for cell manipulation and characterization; field robots; compliant mechanisms; robotic grasping and manipulation with incomplete information and strong disturbance; human-centered robotics; development of high-performance joint drive for robots; modular robots and other mechatronic systems; compliant manipulation learning and control for lightweight robot. Part II: power-assisted system and control; bio-inspired wall climbing robot; underwater acoustic and optical signal processing for environmental cognition; piezoelectric actuators and micro-nano manipulations; robot vision and scene understanding; visual and motional learning in robotics; signal processing and underwater bionic robots; soft locomotion robot; teleoperation robot; autonomous control of unmanned aircraft systems. Part III: marine bio-inspired robotics and soft robotics: materials, mechanisms, modelling, and control; robot intelligence technologies and system integration; continuum mechanisms and robots; unmanned underwater vehicles; intelligent robots for environment detection or fine manipulation; parallel robotics; human-robot collaboration; swarm intelligence and multi-robot cooperation; adaptive and learning control system; wearable and assistive devices and robots for healthcare; nonlinear systems and control. Part IV: swarm intelligence unmanned system; computational intelligence inspired robot navigation and SLAM; fuzzy modelling for automation, control, and robotics; development of ultra-thin-film, flexible sensors, and tactile sensation; robotic technology for deep space exploration; wearable sensing based limb motor function rehabilitation; pattern recognition and machine learning; navigation/localization. Part V: robot legged locomotion; advanced measurement and machine vision system; man-machine interactions; fault detection, testing and diagnosis; estimation and identification; mobile robots and intelligent autonomous systems; robotic vision, recognition and reconstruction; robot mechanism and design. Part VI: robot motion analysis and planning; robot design, development and control; medical robot; robot intelligence, learning and linguistics; motion control; computer integrated manufacturing; robot cooperation; virtual and augmented reality; education in mechatronics engineering; robotic drilling and sampling technology; automotive systems; mechatronics in energy systems; human-robot interaction.
Towards Autonomous Robotic Systems
The two volumes LNAI 11649 and LNAI 11650 constitute the refereed proceedings of the 20th Annual Conference "Towards Autonomous Robotic Systems", TAROS 2019, held in London, UK, in July 2019. The 74 full papers and 12 short papers presented were carefully reviewed and selected from 101 submissions. The papers present and discuss significant findings and advances in autonomous robotics research and applications. They are organized in the following topical sections: robotic grippers and manipulation; soft robotics, sensing and mobile robots; robotic learning, mapping and planning; human-robot interaction; and robotic systems and applications.
COMSOL Heat Transfer Models
This book guides the reader through the process of model creation for heat transfer analysis with the finite element method. Describing thermal imaging experiments that demonstrate how such models can be validated, it presents application examples ranging from heating water in a kettle to basement insulation, a heated seat, molten rock, pipe flow, and an innovative extended surface. A companion disc provides the files so models can be run (using COMSOL or other software) in order to observe real-world behavior of the applications.
Data Science from Scratch
To really learn data science, you should not only master the tools--data science libraries, frameworks, modules, and toolkits--but also understand the ideas and principles underlying them. Updated for Python 3.6, this second edition of Data Science from Scratch shows you how these tools and algorithms work by implementing them from scratch. If you have an aptitude for mathematics and some programming skills, author Joel Grus will help you get comfortable with the math and statistics at the core of data science, and with the hacking skills you need to get started as a data scientist. Packed with new material on deep learning, statistics, and natural language processing, this updated book shows you how to find the gems in today's messy glut of data. Get a crash course in Python; learn the basics of linear algebra, statistics, and probability--and how and when they're used in data science; collect, explore, clean, munge, and manipulate data; dive into the fundamentals of machine learning; implement models such as k-nearest neighbors, Naïve Bayes, linear and logistic regression, decision trees, neural networks, and clustering; and explore recommender systems, natural language processing, network analysis, MapReduce, and databases.
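To give a flavor of the from-scratch style the book advocates, here is a minimal k-nearest-neighbors classifier in plain Python; this is an illustrative sketch, not code from the book, and the toy data and function names are invented for this example.

```python
from collections import Counter
import math

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(k, labeled_points, new_point):
    """Classify new_point by majority vote among its k nearest neighbors."""
    by_distance = sorted(labeled_points, key=lambda lp: distance(lp[0], new_point))
    k_nearest_labels = [label for _, label in by_distance[:k]]
    return Counter(k_nearest_labels).most_common(1)[0][0]

# Toy usage: two small clusters of 2-D points.
data = [((1.0, 1.2), "red"), ((0.9, 1.0), "red"),
        ((3.0, 3.1), "blue"), ((3.2, 2.9), "blue")]
print(knn_predict(3, data, (1.1, 1.1)))  # -> "red"
```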
RoboCup 2017: Robot World Cup XXI
This book includes the post-conference proceedings of the 21st RoboCup International Symposium, held in Nagoya, Japan, in September 2017. The 33 revised full papers and 9 papers from the winning teams presented were carefully reviewed and selected from 58 submissions. The papers are organized in topical sections on robotics, artificial intelligence, environment perception, state estimation, and more.
Intelligent Robotics and Applications
The two-volume set LNAI 10984 and LNAI 10985 constitutes the refereed proceedings of the 11th International Conference on Intelligent Robotics and Applications, ICIRA 2018, held in Newcastle, NSW, Australia, in August 2018. The 81 papers presented in the two volumes were carefully reviewed and selected from 129 submissions. The papers in the first volume of the set are organized in topical sections on multi-agent systems and distributed control; human-machine interaction; rehabilitation robotics; sensors and actuators; and industrial robot and robot manufacturing. The papers in the second volume of the set are organized in topical sections on robot grasping and control; mobile robotics and path planning; robotic vision, recognition and reconstruction; and robot intelligence and learning.
Graph Data Modeling for NoSQL and SQL
Master a graph data modeling technique superior to traditional data modeling for both relational and NoSQL databases (graph, document, key-value, and column), leveraging cognitive psychology to improve big data designs. From Karen Lopez's Foreword: In this book, Thomas Frisendal raises important questions about the continued usefulness of traditional data modeling notations and approaches: Are Entity Relationship Diagrams (ERDs) relevant to analytical data requirements? Are ERDs relevant in the new world of Big Data? Are ERDs still the best way to work with business users to understand their needs? Are Logical and Physical Data Models too closely coupled? Are we correct in using the same notations for communicating with business users and developers? Should we refine our existing notations and tools to meet these new needs, or should we start again from a blank page? What new notations and approaches will we need? How will we use those to build enterprise database systems? Frisendal takes us through the history of data modeling, enterprise data models and traditional modeling methods. He points out, quite contentiously, where he feels we have gone wrong and in a few places where we got it right. He then maps out the psychology of meaning and context, while identifying important issues about where data modeling may or may not fit in business modeling. The main subject of this work is a proposal for a new exploration-driven modeling approach and new modeling notations for business concept models, business solutions models, and physical data models, with examples of how to leverage those for implementation in any target database or datastore. These new notations are based on a property graph approach to modeling data. From the author's introduction: This book proposes a new approach to data modeling--one that "turns the inside out". For well over thirty years, relational modeling and normalization were the name of the game. One can ask: if normalization was the answer, what was the problem? There is something upside-down in that approach, as we will see in this book. Data analysis (modeling) is much like exploration. Almost literally. The data modeler wanders around searching for structure and content. It requires perception and cognitive skills, supported by intuition (a psychological phenomenon), that together determine how well the landscape of business semantics is mapped. Mapping is what we do; we explore the unknowns, draw the maps and post the "Here be Dragons" warnings. Of course there are technical skills involved, and surprisingly, the most important ones come from psychology and visualization (again perception and cognition) rather than pure mathematical ability. Two compelling events make a paradigm shift in data modeling possible, and also necessary: the advances in applied cognitive psychology, which address the needs for a proper contextual framework and for better communication, also in data modeling; and the rapid uptake of non-relational technologies (Big Data and NoSQL).
Data Analytics With Hadoop
Ready to use statistical and machine-learning techniques across large data sets? This practical guide shows you why the Hadoop ecosystem is perfect for the job. Instead of deployment, operations, or software development usually associated with distributed computing, you'll focus on particular analyses you can build, the data warehousing techniques that Hadoop provides, and higher order data workflows this framework can produce. Data scientists and analysts will learn how to perform a wide range of techniques, from writing MapReduce and Spark applications with Python to using advanced modeling and data management with Spark MLlib, Hive, and HBase. You'll also learn about the analytical processes and data systems available to build and empower data products that can handle--and actually require--huge amounts of data. Understand core concepts behind Hadoop and cluster computing; use design patterns and parallel analytical algorithms to create distributed data analysis jobs; learn about data management, mining, and warehousing in a distributed context using Apache Hive and HBase; use Sqoop and Apache Flume to ingest data from relational databases; program complex Hadoop and Spark applications with Apache Pig and Spark DataFrames; and perform machine learning techniques such as classification, clustering, and collaborative filtering with Spark's MLlib.
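As a small taste of the Python-on-Spark style of analysis the book covers, the sketch below counts words with the RDD API in classic MapReduce fashion. It assumes a local Spark installation with pyspark available, and the input path is hypothetical.

```python
from pyspark.sql import SparkSession

# Start a local Spark session (assumes pyspark is installed).
spark = SparkSession.builder.master("local[*]").appName("wordcount-sketch").getOrCreate()
sc = spark.sparkContext

# MapReduce-style word count over a hypothetical input file.
counts = (
    sc.textFile("data/sample.txt")           # hypothetical path
      .flatMap(lambda line: line.split())    # map: emit each word
      .map(lambda word: (word, 1))           # map: (word, 1) pairs
      .reduceByKey(lambda a, b: a + b)       # reduce: sum counts per word
)

for word, count in counts.take(10):
    print(word, count)

spark.stop()
```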
NoSQL and SQL Data Modeling
How do we design for data when traditional design techniques cannot extend to new database technologies? In this era of big data and the Internet of Things, it is essential that we have the tools we need to understand the data coming to us faster than ever before, and to design databases and data processing systems that can adapt easily to ever-changing data schemas and ever-changing business requirements. There must be no intellectual disconnect between data and the software that manages it. It must be possible to extract meaning and knowledge from data to drive artificial intelligence applications. Novel NoSQL data organization techniques must be used side-by-side with traditional SQL databases. Are existing data modeling techniques ready for all of this? The Concept and Object Modeling Notation (COMN) is able to cover the full spectrum of analysis and design. A single COMN model can represent the objects and concepts in the problem space, logical data design, and concrete NoSQL and SQL document, key-value, columnar, and relational database implementations. COMN models enable an unprecedented level of traceability of requirements to implementation. COMN models can also represent the static structure of software and the predicates that represent the patterns of meaning in databases. This book will teach you: the simple and familiar graphical notation of COMN with its three basic shapes and four line styles; how to think about objects, concepts, types, and classes in the real world, using the ordinary meanings of English words that aren't tangled with confused techno-speak; how to express logical data designs that are freer from implementation considerations than is possible in any other notation; how to understand key-value, document, columnar, and table-oriented database designs in logical and physical terms; and how to use COMN to specify physical database implementations in any NoSQL or SQL database with the precision necessary for model-driven development. A quick reference guide to COMN is included in an appendix. The full notation reference is available at http://www.tewdur.com/.
Make: FPGAs
What if you could use software to design hardware? Not just any hardware--imagine specifying the behavior of a complex parallel computer, sending it to a chip, and having it run on that chip--all without any manufacturing. With Field-Programmable Gate Arrays (FPGAs), you can design such a machine with your mouse and keyboard. When you deploy it to the FPGA, it immediately takes on the behavior that you defined. Want to create something that behaves like a display driver integrated circuit? How about a CPU with an instruction set you dreamed up? Or your very own Bitcoin miner? You can do all this with FPGAs. Because you're not writing programs--rather, you're designing a chip whose sole purpose is to do what you tell it--it's faster than anything you can do in code. With Make: FPGAs, you'll learn how to break down problems into something that can be solved on an FPGA, design the logic that will run on your FPGA, and hook up electronic components to create finished projects.
Sams Teach Yourself Exchange Server 2003 in 10 Minutes
Rather than being a migration or architecture guide, Sams Teach Yourself Exchange Server 2003 in 10 Minutes serves as a quick, practical guide to common tasks such as managing mailboxes, creating groups, creating routing groups, using maintenance tools, performing database maintenance, setting policies, mobility, Outlook Web Access, group policy management, performance optimization, backup and restore operations, and troubleshooting.
Creating a Data-Driven Organization
What do you need to become a data-driven organization? Far more than having big data or a crack team of unicorn data scientists, it requires establishing an effective, deeply-ingrained data culture. This practical book shows you how true data-drivenness involves processes that require genuine buy-in across your company, from analysts and management to the C-Suite and the board. Through interviews and examples from data scientists and analytics leaders in a variety of industries, author Carl Anderson explains the analytics value chain you need to adopt when building predictive business models--from data collection and analysis to the insights and leadership that drive concrete actions. You'll learn what works and what doesn't, and why creating a data-driven culture throughout your organization is essential. Start from the bottom up: learn how to collect the right data the right way; hire analysts with the right skills, and organize them into teams; examine statistical and visualization tools, and fact-based story-telling methods; collect and analyze data while respecting privacy and ethics; understand how analysts and their managers can help spur a data-driven culture; and learn the importance of data leadership and C-level positions such as chief data officer and chief analytics officer.
Graph Databases
Discover how graph databases can help you manage and query highly connected data. With this practical book, you'll learn how to design and implement a graph database that brings the power of graphs to bear on a broad range of problem domains. Whether you want to speed up your response to user queries or build a database that can adapt as your business evolves, this book shows you how to apply the schema-free graph model to real-world problems. This second edition includes new code samples and diagrams, using the latest Neo4j syntax, as well as information on new functionality. Learn how different organizations are using graph databases to outperform their competitors. With this book's data modeling, query, and code examples, you'll quickly be able to implement your own solution. Model data with the Cypher query language and property graph model; learn best practices and common pitfalls when modeling with graphs; plan and implement a graph database solution in test-driven fashion; explore real-world examples to learn how and why organizations use a graph database; understand common patterns and components of graph database architecture; and use analytical techniques and algorithms to mine graph database information.
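For a rough sense of the kind of connected-data question graph databases are built to answer natively, here is a tiny pure-Python sketch of a "friends of friends" traversal over an adjacency list; the data is invented, and no Neo4j or Cypher is involved.

```python
# Toy social graph as an adjacency list (node -> set of neighbours).
graph = {
    "Alice": {"Bob", "Carol"},
    "Bob": {"Alice", "Dave"},
    "Carol": {"Alice", "Eve"},
    "Dave": {"Bob"},
    "Eve": {"Carol"},
}

def friends_of_friends(person):
    """Return people exactly two hops away from `person`."""
    direct = graph.get(person, set())
    two_hops = set()
    for friend in direct:
        two_hops |= graph.get(friend, set())
    return two_hops - direct - {person}

print(friends_of_friends("Alice"))  # -> {'Dave', 'Eve'}
```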
Make
The Photon is an open source, inexpensive, programmable, WiFi-enabled module for building connected projects and prototypes. Powered by an ARM Cortex-M3 microcontroller and a Broadcom WiFi chip, the Photon is just as happy plugged into a hobbyist's breadboard as it is into a product rolling off of an assembly line. While the Photon--and its accompanying cloud platform--is designed as a ready-to-go foundation for product developers and manufacturers, it's great for Maker projects, as you'll see in this book. You'll learn how to get started with the free development tools, deploy your sketches over WiFi, and build electronic projects that take advantage of the Photon's processing power, cloud platform, and input/output pins. What's more, the Photon is backward-compatible with its predecessor, the Spark Core.
Object-Role Modeling Fundamentals
Object-Role Modeling (ORM) is a fact-based approach to data modeling that expresses the information requirements of any business domain simply in terms of objects that play roles in relationships. All facts of interest are treated as instances of attribute-free structures known as fact types, where the relationship may be unary (e.g. Person smokes), binary (e.g. Person was born on Date), ternary (e.g. Customer bought Product on Date), or longer. Fact types facilitate natural expression, are easy to populate with examples for validation purposes, and have greater semantic stability than attribute-based structures such as those used in Entity Relationship Modeling (ER) or the Unified Modeling Language (UML). All relevant facts, constraints and derivation rules are expressed in controlled natural language sentences that are intelligible to users in the business domain being modeled. This allows ORM data models to be validated by business domain experts who are unfamiliar with ORM's graphical notation. For the data modeler, ORM's graphical notation covers a much wider range of constraints than can be expressed in industrial ER or UML class diagrams, and thus allows rich visualization of the underlying semantics. Suitable for both novices and experienced practitioners, this book covers the fundamentals of the ORM approach. Written in easy-to-understand language, it shows how to design an ORM model, illustrating each step with simple examples. Each chapter ends with a practical lab that discusses how to use the freeware NORMA tool to enter ORM models and use it to automatically generate verbalizations of the model and map it to a relational database.
Practical Machine Learning
Building a simple but powerful recommendation system is much easier than you think. Approachable for all levels of expertise, this report explains innovations that make machine learning practical for business production settings--and demonstrates how even a small-scale development team can design an effective large-scale recommendation system. Apache Mahout committers Ted Dunning and Ellen Friedman walk you through a design that relies on careful simplification. You'll learn how to collect the right data, analyze it with an algorithm from the Mahout library, and then easily deploy the recommender using search technology, such as Apache Solr or Elasticsearch. Powerful and effective, this efficient combination does learning offline and delivers rapid response recommendations in real time. Understand the tradeoffs between simple and complex recommenders; collect user data that tracks user actions--rather than their ratings; predict what a user wants based on behavior by others, using Mahout for co-occurrence analysis; use search technology to offer recommendations in real time, complete with item metadata; watch the recommender in action with a music service example; and improve your recommender with dithering, multimodal recommendation, and other techniques.
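To illustrate the co-occurrence idea at the heart of the design Dunning and Friedman describe, here is a deliberately simplified pure-Python sketch; the real recipe uses Mahout and a search engine such as Solr or Elasticsearch, and the data and helper names below are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy user histories: which items each user interacted with.
histories = {
    "alice": {"guitar", "amp", "picks"},
    "bob":   {"guitar", "amp", "tuner"},
    "carol": {"drums", "sticks"},
}

# Count how often each pair of items occurs together in one user's history.
cooccurrence = defaultdict(Counter)
for items in histories.values():
    for a in items:
        for b in items:
            if a != b:
                cooccurrence[a][b] += 1

def recommend(item, n=3):
    """Recommend the items most often seen alongside the given item."""
    return [other for other, _ in cooccurrence[item].most_common(n)]

print(recommend("guitar"))  # e.g. ['amp', 'picks', 'tuner']
```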
Data Modeling for MongoDB
Master how to data model MongoDB applications. Congratulations! You completed the MongoDB application within the given tight timeframe and there is a party to celebrate your application's release into production. Although people are congratulating you at the celebration, you are feeling some uneasiness inside. To complete the project on time required making a lot of assumptions about the data, such as what terms meant and how calculations are derived. In addition, the poor documentation about the application will be of limited use to the support team, and not investigating all of the inherent rules in the data may eventually lead to poorly-performing structures in the not-so-distant future. Now, what if you had a time machine and could go back and read this book? You would learn that even NoSQL databases like MongoDB require some level of data modeling. Data modeling is the process of learning about the data, and regardless of technology, this process must be performed for a successful application. You would learn the value of conceptual, logical, and physical data modeling and how each stage increases our knowledge of the data and reduces assumptions and poor design decisions. Read this book to learn how to do data modeling for MongoDB applications, and accomplish these five objectives: Understand how data modeling contributes to the process of learning about the data, and is, therefore, a required technique, even when the resulting database is not relational. That is, NoSQL does not mean NoDataModeling! Know how NoSQL databases differ from traditional relational databases, and where MongoDB fits. Explore each MongoDB object and comprehend how each compares to their data modeling and traditional relational database counterparts, and learn the basics of adding, querying, updating, and deleting data in MongoDB. Practice a streamlined, template-driven approach to performing conceptual, logical, and physical data modeling. Recognize that data modeling does not always have to lead to traditional data models! Distinguish top-down from bottom-up development approaches and complete a top-down case study which ties all of the modeling techniques together.
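As a taste of the MongoDB basics the book reviews (adding, querying, updating, and deleting documents), here is a minimal sketch using the pymongo driver; it assumes a MongoDB server running locally, and the database, collection, field values, and modeling choice (embedding the author) are invented for illustration.

```python
from pymongo import MongoClient

# Connect to a local MongoDB instance (assumed to be running on the default port).
client = MongoClient("mongodb://localhost:27017/")
db = client["bookstore"]          # hypothetical database
titles = db["titles"]             # hypothetical collection

# Add: embed the author directly in the document (one possible modeling choice).
titles.insert_one({"title": "Data Modeling for MongoDB",
                   "author": {"first": "Jane", "last": "Doe"},
                   "tags": ["data modeling", "NoSQL"]})

# Query: find by a nested field.
doc = titles.find_one({"author.last": "Doe"})
print(doc["title"])

# Update: add a tag to the matched document.
titles.update_one({"title": "Data Modeling for MongoDB"},
                  {"$push": {"tags": "MongoDB"}})

# Delete: remove the document again.
titles.delete_one({"title": "Data Modeling for MongoDB"})
```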
Cloud Computing and SOA Convergence in Your Enterprise
"In this book, David Linthicum does that rarest of things: he manages to combine showing why SOA and Cloud Computing complement one another with a lucid game plan of how a business can take advantage of the synergies between them in concrete ways that will contribute to the bottom line." --Jeremy GeelanConference Chair, Cloud Computing Conference and Expo seriesSr. VP, SYS-CON Media and Events Massive, disruptive change is coming to IT as Software as a Service (SaaS), SOA, mashups, Web 2.0, and cloud computing truly come of age. Now, one of the world's leading IT innovators explains what it all means--coherently, thoroughly, and authoritatively. Writing for IT executives, architects, and developers alike, world-renowned expert David S. Linthicum explains why the days of managing IT organizations as private fortresses will rapidly disappear as IT inevitably becomes a global community. He demonstrates how to run IT when critical elements of customer, product, and business data and processes extend far beyond the firewall--and how to use all that information to deliver real-time answers about everything from an individual customer's credit to the location of a specific cargo container. Cloud Computing and SOA Convergence in Your Enterprise offers a clear-eyed assessment of the challenges associated with this new world--and offers a step-by-step program for getting there with maximum return on investment and minimum risk. Using multiple examples, Linthicum: Reviews the powerful cost, value, and risk-related drivers behind the move to cloud computing--and explains why the shift will accelerate Explains the technical underpinnings, supporting technologies, and best-practice methods you'll need to make the transition Helps you objectively assess the promise of SaaS, Web 2.0, and SOA for your organization, quantify value, and make the business case Walks you through evaluating your existing IT infrastructure and finding your most cost-effective, safest path to the "cloud" Shows how to choose the right candidate data, services, and processes for your cloud computing initiatives Guides you through building disruptive infrastructure and next-generation process platforms Helps you bring effective, high-value governance to the clouds If you're ready to begin driving real competitive advantage from cloud computing, this book is the start-to-finish roadmap you need to make it happen.
Development With the Force.com Platform
Master Force.com, Today's Fastest, Most Flexible Cloud Development Platform. With Salesforce.com's Force.com platform, you can build and deploy powerful cloud-based enterprise applications faster than ever before. Now, Jason Ouellette gives you all the practical, technical guidance you need to make the most of the newest Force.com releases in your own custom cloud applications. Throughout, he adds new code and updated best practices for rapidly prototyping, building, and testing production-quality Force.com solutions. This edition's extensive new coverage includes Developer Console, JSON, Streaming and Tooling APIs, Bulk API, Force.com Canvas, REST integration, support for Web MVC frameworks, Dynamic Apex and Visualforce, and an all-new chapter on mobile user interfaces. Ouellette covers the entire platform: UIs, database design, analytics, security, and many other topics. His code examples emphasize maintainability, flexibility, and seamless integration--and you can run and adapt all of them with a free Force.com Developer Edition account. Coverage includes: Leveraging Force.com's customizable infrastructure to deliver advanced Platform-as-a-Service (PaaS) solutions Understanding Force.com's unique processes, tools, and architecture Developing a complete application, from requirements and use cases through deployment Using the Force.com database as a framework for highly flexible, maintainable applications Applying Force.com's baked-in security, including user identity, data ownership, and fine-grained access control Constructing powerful business logic with Apex, SOQL, and SOSL Adopting asynchronous actions, Single Page Applications, and other advanced features in Web user interfaces Building intuitive user interfaces with Visualforce, and extending them to public-facing websites and mobile devices Creating smartphone/tablet-friendly apps with HTML5 and Visualforce Performing massive data-intensive tasks offline with Batch Apex Using Force.com integration options, including REST, SOAP, Canvas, and the Streaming, Bulk, Tooling, and Metadata APIs Developing internal social applications with Force.com's Chatter collaboration tools If you're already building Web or mobile applications, take your next giant step into enterprise cloud development--with Development with the Force.com Platform, Third Edition. All code examples in this book are available on Github at http://goo.gl/fjRqMX, and as a Force.com IDE project on Github at https://github.com/jmouel/dev-with-force-3e.
Data Engineering
If you found a rusty old lamp on the beach, and upon touching it a genie appeared and granted you three wishes, what would you wish for? If you were wishing for a successful application development effort, most likely you would wish for accurate and robust data models, comprehensive data flow diagrams, and an acute understanding of human behavior. The wish for well-designed conceptual and logical data models means the requirements are well-understood and that the design has been built with flexibility and extensibility leading to high application agility and low maintenance costs. The wish for detailed data flow diagrams means a concrete understanding of the business' value chain exists and is documented. The wish to understand how we think means excellent team dynamics while analyzing, designing, and building the application. Why search the beaches for genie lamps when instead you can read this book? Learn the skills required for modeling, value chain analysis, and team dynamics by following the journey the author and his son go through in establishing a profitable summer lemonade business. This business grew from season to season proportionately with his adoption of important engineering principles. All of the concepts and principles are explained in a novel format, so you will learn the important messages while enjoying the story that unfolds within these pages. The story is about an old man who has spent his life designing data models and databases and his newly adopted son. Father and son have a 54-year age difference that produces a large generation gap. The father attempts to narrow the generation gap by having his nine-year-old son earn his entertainment money. The son must run a summer business that turns a lemon grove into profits so he can buy new computers and games. As the son struggles for profits, it becomes increasingly clear that dad's career in information technology can provide critical leverage in achieving success in business. The failures and successes of the son's business over the summers are a microcosm of the ups and downs of many enterprises as they struggle to manage information technology.
UML Database Modeling
With our appetites for data on the rise, it has become more important than ever to use UML (Unified Modeling Language) to capture and precisely represent all of these data requirements. Learn how to construct UML data models by working through a series of exercises and self-assessment tests. Beginners can learn the UML directly. Experienced modelers can leverage their understanding of existing database notations, as the book extensively compares the UML to traditional data modeling (Information Engineering). Discover a new way of representing data requirements and communicating better with your business customers. Understand what UML constructs mean and how to properly use them. Learn subtleties of the UML. Become a power UML developer. Practice constructing data models with the exercises. The back of the book answers every exercise. Assess your mastery of the material. Each part has a multiple-choice test that can quantify your understanding. Improve your ability to abstract - think about different ways of representation - as you construct data models. Measure the quality of your data models. Be able to create database designs (DDL code) starting from a UML data model. Be able to write SQL database queries using a data model as a blueprint. Know the differences among operational models, data warehouse models, enterprise models, and master models. They are all aspects of data modeling. This book is concise and to the point. You will learn by induction through reading, practice, and feedback.
Analyzing the Analyzers
Despite the excitement around "data science," "big data," and "analytics," the ambiguity of these terms has led to poor communication between data scientists and organizations seeking their help. In this report, authors Harlan Harris, Sean Murphy, and Marck Vaisman examine their survey of several hundred data science practitioners in mid-2012, when they asked respondents how they viewed their skills, careers, and experiences with prospective employers. The results are striking. Based on the survey data, the authors found that data scientists today can be clustered into four subgroups, each with a different mix of skillsets. Their purpose is to identify a new, more precise vocabulary for data science roles, teams, and career paths. This report describes: Four data scientist clusters: Data Businesspeople, Data Creatives, Data Developers, and Data Researchers Cases in miscommunication between data scientists and organizations looking to hire Why "T-shaped" data scientists have an advantage in breadth and depth of skills How organizations can apply the survey results to identify, train, integrate, team up, and promote data scientists
SharePoint 2013 How-To
SharePoint 2013 How-To: Need fast, reliable, easy-to-implement solutions for SharePoint 2013? This book delivers exactly what you're looking for: step-by-step help and guidance with the tasks that users, authors, content managers, and site managers perform most often. Fully updated to reflect SharePoint 2013's latest improvements and fluid new design, it covers everything from lists and views to social networking, workflows, and security. The industry's most focused SharePoint resource, SharePoint 2013 How-To provides all the answers you need--now! Ishai Sagi is a SharePoint developer and architect who provides solutions through his company, Extelligent Design, which is Canberra, Australia's leading SharePoint consultancy. Sagi has worked with SharePoint since it was introduced in 2001. Honored four times by Microsoft as a Microsoft Office SharePoint Server MVP, he has trained many end users, administrators, and developers in using SharePoint or developing solutions for it. He leads Canberra's SharePoint user group and has spoken at Microsoft conferences around the world. He hosts the popular blog SharePoint Tips and Tricks (www.sharepoint-tips.com), and authored SharePoint 2010 How-To. Fast, Accurate, and Easy-to-Use! • Quickly review essential SharePoint terminology and concepts • Master SharePoint 2013's revamped interface for Windows PCs, Surface, and smartphones • Run SharePoint in the cloud with Microsoft Office 365 and SkyDrive • Find, log on to, and navigate SharePoint sites • Create, manage, and use list items, documents, and forms • Alert yourself to new or changed content • Use views to work with content more efficiently • Leverage SharePoint 2013's revamped search capabilities • Organize content with lists, document libraries, and templates • Use powerful social networking features, including tagging, NewsFeed updates, and microblogging • Author and edit each type of SharePoint page • Build flexible navigation hierarchies with Managed Metadata • Systematically manage site security and content access • Control permissions more effectively with the Permissions Page • Create and track workflows, and integrate them with lists or libraries • Customize a site's appearance, settings, and behavior • Create new Office 365 private and public site collections
MySQL Workbench
Publisher's Note: Products purchased from Third Party sellers are not guaranteed by the publisher for quality, authenticity, or access to any online entitlements included with the product. The only Oracle Press guide to MySQL Workbench explains how to design and model MySQL databases. MySQL Workbench Data Modeling and Development helps developers learn how to effectively use this powerful product for database modeling, reverse engineering, and interaction with the database without writing SQL statements. MySQL Workbench is a graphical user interface that can be used to create and maintain MySQL databases without coding. The book covers the interface and explains how to accomplish each step by illustrating best practices visually. Clear examples, instructions, and explanations reveal, in a single volume, the art of database modeling. This Oracle Press guide shows you how to get the tool to do what you want. Annotated screen shots demonstrate all interactions with the tool, and text explains the how, what, and why of each step. Complete coverage: Installation and Configuration; Creating and Managing Connections; Data Modeling Concepts; Creating an ERD; Defining the Physical Schemata; Creating and Managing Tables; Creating and Managing Relationships; Creating and Managing Views; Creating and Managing Routines; Creating and Managing Routine Groups; Creating and Managing Users & Groups; Creating and Managing SQL Scripts; Generating SQL Scripts; Forward Engineering a Data Model; Synchronize a Model with a Database; Reverse Engineering a Database; Managing Differences in the Data Catalog; Creating and Managing Model Notes; Editing Table Data; Editing Generated Scripts; Creating New Instances; Managing Import and Export; Managing Security; Managing Server Instances
Professional Microsoft IIS 8
Stellar author team of Microsoft MVPs helps developers and administrators get the most out of Windows IIS 8. If you're a developer or administrator, you'll want to get thoroughly up to speed on Microsoft's new IIS 8 platform with this complete, in-depth reference. Prepare yourself to administer IIS 8 in not only commercial websites and corporate intranets, but also the mass web hosting market with this expert content. The book covers common administrative tasks associated with monitoring and managing an IIS environment--and then moves well beyond, into extensibility, scripted admin, and other complex topics. The book highlights automated options outside the GUI, options that include the PowerShell provider and AppCmd tool. It explores extensibility options for developers, including ISAPI and HTTPModules. And, it delves into security protocols and high availability/load balancing at a level of detail that is not often found in IIS books. Author team includes Microsoft MVPs and an IIS team member Covers the management and monitoring of Microsoft Internet Information Services (IIS) 8 for administrators and developers, including MOF and MOM Delves into topics not often included in IIS books, including using the PowerShell provider and AppCmd tool and other automated options, and extending IIS 8 with ISAPI or HTTPModules Explores security issues in depth, including high availability/load balancing, and the Kerberos, NTLM, and PKI/SSL protocols Explains how to debug and troubleshoot IIS. Professional Microsoft IIS 8 features a wealth of information gathered from individuals running major intranets and web hosting facilities today, making this an indispensable and real-world reference to keep on hand.
The Nimble Elephant
Leverage data model patterns during agile development to save time and build more robust applications. "Get it done well and get it done fast" are twin, apparently opposing, demands. Data architects are increasingly expected to deliver quality data models in challenging timeframes, and agile developers are increasingly expected to ensure that their solutions can be easily integrated with the data assets of the overall organization. If you need to deliver quality solutions despite exacting schedules, "The Nimble Elephant" will help by describing proven techniques that leverage the libraries of published data model patterns to rapidly assemble extensible and robust designs. The three sections in the book provide guidelines for applying the lessons to your own situation, so that you can apply the techniques and patterns immediately to your current assignments. The first section, Foundations for Data Agility, addresses some perceived aspects of friction between "data" and "agile" practitioners. As a starting point for resolving the differences, pattern levels of granularity are classified, and their interdependencies exposed. A context of various types of models is established (e.g. conceptual / logical / physical, and industry / enterprise / project), and you will learn how to customize patterns within specific model types. The second section, Steps Towards Data Agility, shares guidelines on generalizing and specializing, with cautions on the dangers of going too far. Creativity in using patterns beyond their intended purpose is encouraged. The short-term "You Ain't Gonna Need It" (YAGNI) philosophy of agile practitioners, and the longer-term strategic perspectives of architects, are compared and evaluated. Consideration is given to the potential of enterprise views contributing to project-specific models. Other topics include industry models, iterative modeling, creation of patterns when none exist, and patterns for rules-in-data. The section ends with a perspective on the modeler's possible role in agile projects, followed by a case study. The final section, A Bridge to the Land of Object Orientation, provides a pathway for re-skilling traditional data modelers who want to expand their options by actively engaging with the ranks of object-oriented developers.
Data Modeling Made Simple with PowerDesigner
Data Modeling Made Simple with PowerDesigner will provide the business or IT professional with a practical working knowledge of data modeling concepts and best practices, and how to apply these principles with PowerDesigner. You'll build many PowerDesigner data models along the way, increasing your skills first with the fundamentals and later with more advanced features of PowerDesigner. This book combines real-world experience and best practices to help you master the following ten objectives: You will know when a data model is needed and which PowerDesigner models are the most appropriate for each situation You will be able to read a data model of any size and complexity with the same confidence as reading a book You will know when to apply and how to make use of all the key features of PowerDesigner You will be able to build, step-by-step in PowerDesigner, a pyramid of linked data models, including a conceptual data model, a fully normalized relational data model, a physical data model, and an easily navigable dimensional model You will be able to apply techniques such as indexing, transforms, and forward engineering to turn a logical data model into an efficient physical design You will improve data governance and modeling consistency within your organization by leveraging features such as PowerDesigner's reference models, Glossary, domains, and model comparison and model mapping techniques You will know how to utilize dependencies and traceability links to assess the impact of change You will know how to integrate your PowerDesigner models with externally-managed files, including the import and export of data using Excel and Requirements documents You will know where you can take advantage of the entire PowerDesigner model set, to increase the success rate of corporate-wide initiatives such as business intelligence and enterprise resource planning (ERP) You will understand the key differentiators between PowerDesigner and other data modeling tools you may have used before
UML and Data Modeling
Here you will learn how to develop an attractive, easily readable, conceptual, business-oriented entity/relationship model, using a variation on the UML Class Model notation. This book has two audiences: Data modelers (both analysts and database designers) who are convinced that UML has nothing to do with them; and UML experts who don't realize that architectural data modeling really is different from object modeling (and that the differences are important). David Hay's objective is to finally bring these two groups together in peace. Here all modelers will receive guidance on how to produce a high quality (that is, readable) entity/relationship model to describe the data architecture of an organization. The notation involved happens to be the one for class models in the Unified Modeling Language, even though UML was originally developed to support object-oriented design. Designers have a different view of the world from those who develop business-oriented conceptual data models, which means that to use UML for architectural modeling requires some adjustments. These adjustments are described in this book. David Hay is the author of Enterprise Model Patterns: Describing the World, a comprehensive model of a generic enterprise. The diagrams were at various levels of abstraction, and they were all rendered in the slightly modified version of UML Class Diagrams presented here. This book is a handbook to describe how to build models such as these. By way of background, an appendix provides a history of the two groups, revealing the sources of their different attitudes towards the system development process.
iOS and Sensor Networks
Turn your iPhone or iPad into the hub of a distributed sensor network with the help of an Arduino microcontroller. With this concise guide, you'll learn how to connect an external sensor to an iOS device and have them talk to each other through Arduino. You'll also build an iOS application that will parse the sensor values it receives and plot the resulting measurements, all in real-time. iOS processes data from its own onboard sensors, and now you can extend its reach with this simple, low-cost project. If you're an Objective-C programmer who likes to experiment, this book explains the basics of Arduino and other hardware components you need--and lets you have fun in the process. Learn how to connect the Arduino platform to any iOS device Build a simple application to control your Arduino directly from an iPad Gather measurements from an ultrasonic range finder and display them on your iPhone Connect an iPhone, iPad, or iPod Touch to an XBee radio network Explore other methods for connecting external sensors to iOS, including Ethernet and the MIDI protocol
Data Modeling Made Simple with CA ERwin Data Modeler r8
Data Modeling Made Simple with CA ERwin Data Modeler r8 will provide the business or IT professional with a practical working knowledge of data modeling concepts and best practices, and how to apply these principles with CA ERwin Data Modeler r8. You'll build many CA ERwin data models along the way, mastering first the fundamentals and later in the book the more advanced features of CA ERwin Data Modeler. This book combines real-world experience and best practices with down-to-earth advice, humor, and even cartoons to help you master the following ten objectives: Understand the basics of data modeling and relational theory, and how to apply these skills using CA ERwin Data Modeler Read a data model of any size and complexity with the same confidence as reading a book Understand the difference between conceptual, logical, and physical models, and how to effectively build these models using CA ERwin Data Modeler's Design Layer Architecture Apply techniques to turn a logical data model into an efficient physical design and vice-versa through forward and reverse engineering, for both top-down and bottom-up design Learn how to create reusable domains, naming standards, UDPs, and model templates in CA ERwin Data Modeler to reduce modeling time, improve data quality, and increase enterprise consistency Share data model information with various audiences using model formatting and layout techniques, reporting, and metadata exchange Use the new workspace customization features in CA ERwin Data Modeler r8 to create a workflow suited to your own individual needs Leverage the new Bulk Editing features in CA ERwin Data Modeler r8 for mass metadata updates, as well as import/export with Microsoft Excel Compare and merge model changes using CA ERwin Data Modeler's Complete Compare features Optimize the organization and layout of your data models through the use of Subject Areas, Diagrams, Display Themes, and more. Section I provides an overview of data modeling: what it is, and why it is needed. The basic features of CA ERwin Data Modeler are introduced with a simple, easy-to-follow example. Section II introduces the basic building blocks of a data model, including entities, relationships, keys, and more. How-to examples using CA ERwin Data Modeler are provided for each of these building blocks, as well as 'real world' scenarios for context. Section III covers the creation of reusable standards, and their importance in the organization. From standard data modeling constructs such as domains to CA ERwin-specific features such as UDPs, this section covers step-by-step examples of how to create these standards in CA ERwin Data Modeler, from creation, to template building, to sharing standards with end users through reporting and queries. Section IV discusses conceptual, logical, and physical data models, and provides a comprehensive case study using CA ERwin Data Modeler to show the interrelationships between these models using CA ERwin's Design Layer Architecture. Real world examples are provided from requirements gathering, to working with business sponsors, to the hands-on nitty-gritty details of building conceptual, logical, and physical data models with CA ERwin Data Modeler r8.
System Center Service Manager 2010
System Center Service Manager 2010 offers enterprises a complete, integrated platform for adopting and automating service management best practices, such as those found in ITIL and Microsoft Operations Framework (MOF). Now, there's a comprehensive, independent reference and technical guide to this powerful product. A team of expert authors offers step-by-step coverage of related topics in every feature area, organized to help IT professionals quickly plan, design, implement, and use Service Manager 2010. After introducing the product and its relationship with the rest of Microsoft's System Center suite, the authors present authoritative coverage of Service Manager's capabilities for incident and problem resolution, change control, configuration management, and compliance. Readers will also find expert guidance for integrating Service Manager with related Microsoft technologies. This book is an indispensable resource for every IT professional planning, installing, deploying, and/or administering Service Manager, including ITIL, MOF, and other IT consultants; system administrators; and developers creating customized solutions. - Understand Service Manager's architecture and components - Discover how Service Manager supports ITIL and MOF processes - Accurately scope and specify your implementation to reflect organizational needs - Plan to provide redundancy, ensure scalability, and support virtualization - Design, deploy, and maintain Service Manager with security in mind - Use Service Manager's consoles and portals to provide the right resources to each user - Create complete service maps with Service Manager's business services - Fully automate incident management and ticketing - Implement best processes for identifying and addressing root causes of problems - Systematically manage the life cycle of changes - Use Service Manager to strengthen governance, risk management, and compliance - Customize Service Manager's data layer, workflows, and presentation layer - Use management packs to simplify service desk customization - Make the most of Service Manager's reporting and dashboards
Knowledge Engineering
Knowledge engineering (KE) and data mining are areas of common interest to researchers in AI, pattern recognition, statistics, databases, knowledge acquisition, data visualization, high-performance computing, and expert systems. This book is divided into seven major parts. Part one focuses on document and multi-document reconstruction and summarization, medical imaging, opinion mining, PCA/LDA, cross-correlation, and phase-based matching. Part two covers application areas of data mining such as data cleaning, weather forecasting, and web mining. Part three covers HCI, ECG, direct manipulation interfaces, face recognition in crowds, gesture recognition for mobile devices, chaotic dynamics, epilepsy and Alzheimer's diagnosis, CAL, Devanagari character recognition, and speech databases. Web mining related areas such as clustering, web usage mining, web log analysis, BI, web indexing, crawlers, and link mining are covered in part four. Data mining algorithms related to decision trees, association rules and the trie-based Apriori algorithm, decision support, and GIS are covered in part five. The sixth part covers aspects of security such as density-based approaches, intrusion detection in Oracle, unbalanced datasets, and dark block extraction. The last part covers other allied areas of data mining, with applications such as customer review analysis, SOA governance planning, mobile ad-hoc networks, a KE framework for technical education institutes, time series analysis, extraction of genetic features, KD in agriculture crop production, earthquake prediction, and credit card fraud detection.
Data Resource Simplexity
Do you fully understand all the data in your organization's data resource? Can you readily find and easily access the data you need to support your business activities? If you find multiple sets of the same data, can you readily determine which is the most current and correct? No? Then consider this book essential reading. It will help you develop a high quality data resource that supports business needs. Data Resource Simplexity explains how a data resource goes disparate, how to stop that trend toward disparity, and how to develop a high quality, comparate data resource. It explains how to stop the costly business impacts of disparate data. It explains both the architectural and the cultural aspects of developing a comparate data resource. It explains how to manage data as a critical resource equivalent to the other critical resources of an organization--finances, human resource, and real property. Drawing from his nearly five decades of data management experience, plus his leveraging of theories, concepts, principles, and techniques from disciplines as diverse as human dynamics, mathematics, physics, agriculture, chemistry, and biology, Michael Brackett shows how you can transform your organization's data resource into a trusted invaluable companion for both business and data management professionals. Chapter 1 reviews the trend toward rampant data resource disparity that exists in most public and private sector organizations today--why the data resource becomes complex. Chapter 2 introduces the basic concepts of planned data resource comparity--how to make the data resource elegant and simple. Chapter 3 presents the concepts, principles, and techniques of a Common Data Architecture within which all data in the organization are understood and managed. Chapters 4 through 8 present the five architectural aspects of data resource management. Chapter 4 explains the development of formal data names. Chapter 5 explains the development of comprehensive data definitions. Chapter 6 explains the development of proper data structures. Chapter 7 explains the development of precise data integrity rules. Chapter 8 explains the management of robust data documentation. Chapters 9 through 13 present the five cultural aspects of data resource management. Chapter 9 explains the development of a reasonable orientation for the data resource. Chapter 10 explains acceptable data availability to the business. Chapter 11 explains adequate data responsibility for the data resource. Chapter 12 explains an expanded data vision for managing the data resource. Chapter 13 explains how to achieve appropriate data recognition. Chapter 14 presents a summary explaining that development of a comparate data resource is a cultural choice of the organization and the need for a formal data resource management profession.
Microsoft Lync Server 2010 Unleashed
This is the industry's most comprehensive, realistic, and useful guide to Microsoft Lync Server 2010. It brings together "in-the-trenches" guidance for all facets of planning, integration, deployment, and administration, from expert consultants who've spent years implementing Microsoft Unified Communications solutions. The authors first introduce Microsoft Lync Server 2010 and show how it represents a powerful leap beyond earlier unified communications platforms. They systematically cover every form of communication Lync Server can manage, including IP voice, instant messaging, audio/video conferencing, web conferencing, and more. You'll find expert guidance on planning infrastructure, managing day-to-day operations, enforcing security, troubleshooting problems, and many other crucial topics. Drawing on their extensive experience, the authors combine theory, step-by-step configuration instructions, and best practices from real enterprise environments. They identify common mistakes and present proven solutions and workarounds. Simply put, this book tells you what works-and shows you how to make it work. Plan and manage server roles, including Front End, Edge, Monitoring, Archiving, and Director roles Understand Lync Server integration with Active Directory, DNS, certificates, and SQL Server Manage Lync Server through the Lync Server management shell and Microsoft Systems Center Operations Manager Migrate smoothly from OCS 2007, 2007 R2, or Live Communications Server Utilize Lync Server's new enterprise voice and audio conferencing features Use Lync Server with your PBX, as a PBX replacement, or in your call center Integrate presence into SharePoint pages or Exchange/Outlook web applications Build custom solutions with the new Unified Communications Managed API Deploy new Lync Server client software, including Mac, mobile, and browser/Silverlight clients Integrate headsets, handsets, webcams, and conference room phones Use the new virtualization policy to simplify deployment
BizTalk Server 2002 Design and Implementation
* The author designed and implemented the first enterprise-level BizTalk solution in the financial industry
* Technical discussion is concise and to the point
* Readers learn about BizTalk Server 2002 by building an actual BizTalk Server application in a hands-on approach
* The only book that provides complete and detailed coverage of BizTalk Server 2002
Enterprise Model Patterns
Here you'll find one key to the development of a successful information system: clearly capture and communicate both the abstract and concrete building blocks of data that describe your organization. In 1995, David Hay published Data Model Patterns: Conventions of Thought, the groundbreaking book on how to use standard data models to describe standard business situations. Enterprise Model Patterns: Describing the World builds on the concepts presented there, adds 15 years of practical experience, and presents a more comprehensive view. You will learn how to apply both the abstract and concrete elements of your enterprise's architectural data model through four levels of abstraction:

Level 0: An abstract template that underlies the Level 1 model that follows, plus two meta models:
* Information Resources. In addition to books, articles, and e-mail notes, this also includes photographs, videos, and sound recordings.
* Accounting. Accounting is remarkable because it is itself a modeling language. It takes a very different approach from data modeling in that, instead of using entities and entity classes that represent things in the world, it is concerned with accounts that represent bits of value to the organization.

Level 1: An enterprise model that is generic enough to apply to any company or government agency, yet concrete enough to be readily understood by all. It describes:
* People and Organization. Who is involved with the business? The people involved are not only the employees within the organization, but also customers, agents, and others with whom the organization comes in contact. Organizations of interest include the enterprise itself and its internal departments, as well as customers, competitors, government agencies, and the like.
* Geographic Locations. Where is business conducted? A geographic location may be a geographic area (any bounded area on the Earth), a geographic point (used to identify a particular location), or, if you are an oil company for example, a geographic solid (such as an oil reserve).
* Assets. What tangible items are used to carry out the business? These are any physical things that are manipulated, sometimes as products, but also as the means of producing products and services.
* Activities. How is the business carried out? This model covers not only services offered but also projects and any other kinds of activities. In addition, the model describes the events that cause activities to happen.
* Time. All data is positioned in time, but some more than others.

Level 2: A more detailed model describing specific functional areas: Facilities; Human Resources; Communications and Marketing; Contracts; Manufacturing; The Laboratory.

Level 3: Examples of the details a model can have to address what is truly unique in a particular industry.
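To make the Level 1 "People and Organization" idea concrete, here is a minimal sketch in Python of a generic party pattern: a Party supertype with Person and Organization subtypes, plus a relationship entity that links parties over time. This is an illustration in the spirit of such models, not the book's own notation; all class and field names are hypothetical.

```python
# Minimal, hypothetical sketch of a "party" pattern: a Party supertype,
# Person and Organization subtypes, and a relationship entity between parties.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Party:
    party_id: int
    name: str


@dataclass
class Person(Party):
    """A human party: an employee, a customer contact, an agent, and so on."""
    birth_date: Optional[str] = None


@dataclass
class Organization(Party):
    """An organization: the enterprise itself, a department, a customer,
    a competitor, a government agency, and so on."""
    org_type: str = "company"


@dataclass
class PartyRelationship:
    """Associates two parties over time, e.g. 'employed by' or 'part of'."""
    from_party: Party
    to_party: Party
    relationship_type: str
    effective_date: str
    expiration_date: Optional[str] = None


# Example: a department that is part of the enterprise, and an employee of it.
acme = Organization(party_id=1, name="Acme Corp")
sales = Organization(party_id=2, name="Sales Department", org_type="department")
alice = Person(party_id=3, name="Alice Smith")

relationships: List[PartyRelationship] = [
    PartyRelationship(sales, acme, "part of", "2010-01-01"),
    PartyRelationship(alice, sales, "employed by", "2011-06-15"),
]

for r in relationships:
    print(f"{r.from_party.name} {r.relationship_type} {r.to_party.name}")
```

The point of the pattern is that anything the business deals with, whether person or organization, is a party, so relationships such as employment or customership can be modeled once rather than repeated for every combination.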
Oracle Streams 11g
Publisher's Note: Products purchased from Third Party sellers are not guaranteed by the publisher for quality, authenticity, or access to any online entitlements included with the product.

Master Oracle Streams 11g Replication

Enable real-time information access and data sharing across your distributed framework using the expert information in this Oracle Press guide. Oracle Streams 11g Data Replication explains how to set up and administer a unified enterprise data sharing infrastructure. Learn how to capture, propagate, and apply database changes, transform data, and handle data conflicts. Monitoring, optimizing, and troubleshooting techniques are also covered in this comprehensive volume.

* Understand Oracle Streams components and architecture
* Gain in-depth knowledge about capturing, propagating, and applying data manipulation language (DML) and data definition language (DDL) changes
* Learn how to access and modify the contents of Logical Change Records
* Build custom procedures for data transformations
* Configure Oracle Streams replication for the database, schemas, and tables
* Tune Oracle Streams performance for improved throughput
* Manage and monitor Oracle Streams using Oracle Enterprise Manager Grid Control
* Learn from several practical examples and scripts
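As a rough orientation to what such a configuration looks like, the sketch below issues one Oracle Streams setup call (adding capture rules for a single table via the DBMS_STREAMS_ADM package) from Python through cx_Oracle. It is not taken from the book: the connection string, queue name, and stream name are placeholders, and a complete replication setup also needs propagation and apply rules on the destination database.

```python
# Rough, hypothetical sketch: add capture rules for one table using Oracle's
# DBMS_STREAMS_ADM package, called from Python via cx_Oracle.
# Connection details, queue name, and stream name are placeholders.
import cx_Oracle

conn = cx_Oracle.connect("strmadmin", "password", "dbhost:1521/orcl")
cur = conn.cursor()

# Anonymous PL/SQL block: capture DML (but not DDL) changes for hr.employees
# into the named Streams queue.
cur.execute("""
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'capture',
    streams_name => 'capture_hr',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
""")
conn.commit()
print("Capture rules added for hr.employees")
```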
Star Schema
Publisher's Note: Products purchased from Third Party sellers are not guaranteed by the publisher for quality, authenticity, or access to any online entitlements included with the product.

The definitive guide to dimensional design for your data warehouse

Learn the best practices of dimensional design. Star Schema: The Complete Reference offers in-depth coverage of design principles and their underlying rationales. Organized around design concepts and illustrated with detailed examples, this is a step-by-step guidebook for beginners and a comprehensive resource for experts. This all-inclusive volume begins with dimensional design fundamentals and shows how they fit into diverse data warehouse architectures, including those of W.H. Inmon and Ralph Kimball. The book progresses through a series of advanced techniques that help you address real-world complexity, maximize performance, and adapt to the requirements of BI and ETL software products. You are furnished with design tasks and deliverables that can be incorporated into any project, regardless of architecture or methodology.

* Master the fundamentals of star schema design and slow change processing
* Identify situations that call for multiple stars or cubes
* Ensure compatibility across subject areas as your data warehouse grows
* Accommodate repeating attributes, recursive hierarchies, and poor data quality
* Support conflicting requirements for historic data
* Handle variation within a business process and correlation of disparate activities
* Boost performance using derived schemas and aggregates
* Learn when it's appropriate to adjust designs for BI and ETL tools
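For readers new to the topic, the sketch below shows the basic shape of a star schema: a central fact table whose foreign keys point at surrounding dimension tables, queried with a grouped join. It uses Python's standard-library sqlite3 module so it is self-contained; the table and column names are illustrative only, not examples from the book.

```python
# Minimal, hypothetical star schema: a sales fact table surrounded by
# two dimension tables, plus a typical grouped "slice and dice" query.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,
    full_date    TEXT,
    month_name   TEXT,
    year         INTEGER
);
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);
CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date(date_key),
    product_key  INTEGER REFERENCES dim_product(product_key),
    quantity     INTEGER,
    sales_amount REAL
);
""")

cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?, ?)", [
    (20240101, "2024-01-01", "January", 2024),
    (20240201, "2024-02-01", "February", 2024),
])
cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)", [
    (1, "Widget", "Hardware"),
    (2, "Gadget", "Hardware"),
])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)", [
    (20240101, 1, 10, 100.0),
    (20240101, 2, 5, 75.0),
    (20240201, 1, 7, 70.0),
])

# Typical dimensional query: numeric facts grouped by dimension attributes.
for row in cur.execute("""
    SELECT d.month_name, p.category, SUM(f.sales_amount) AS total_sales
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.month_name, p.category
    ORDER BY d.month_name
"""):
    print(row)
```

Facts (quantities, amounts) live only in the fact table, while descriptive attributes live in the dimensions, which is what keeps ad hoc grouping and filtering both simple and fast.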
Create Stunning HTML Email That Just Works
Create Stunning HTML Email That Just Works is a step-by-step guide to creating beautiful HTML emails that consistently work. It begins with an introduction to email covering topics such as how email design differs from web design, permission-based marketing, and the anatomy of an email.

What You Will Learn:
* How to design HTML emails that look great
* Simple methods to design and test email newsletters
* Best-practice, permission-based email marketing tips and techniques
* Proven strategies for selling email design services to your clients

The book shows the reader how to plan, design, and build gorgeous HTML email designs that look great in every email program: Outlook, Gmail, Apple Mail, and others. All-important tasks such as legal requirements, testing, spam compliance, and known hacks and workarounds are covered.
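The core techniques behind "email that just works" in most clients are a plain-text alternative part, table-based layout, and inline CSS. The sketch below illustrates that general approach using Python's standard email library; the addresses, SMTP host, and markup are placeholders, not content from the book.

```python
# Minimal sketch of building an HTML email the "email-safe" way:
# a plain-text fallback, a table-based layout, and inline CSS
# (external stylesheets and complex CSS are unreliable across email clients).
from email.message import EmailMessage
import smtplib

msg = EmailMessage()
msg["Subject"] = "Monthly newsletter"
msg["From"] = "news@example.com"
msg["To"] = "subscriber@example.com"

# Plain-text part: shown by clients that block or strip HTML.
msg.set_content("Our monthly newsletter. View it online at https://example.com/news")

# HTML part: a single centered table with all styles written inline.
html = """\
<html>
  <body style="margin:0; padding:0; background-color:#f4f4f4;">
    <table role="presentation" width="600" align="center" cellpadding="0"
           cellspacing="0" style="background-color:#ffffff;">
      <tr>
        <td style="padding:20px; font-family:Arial, sans-serif; font-size:14px;">
          <h1 style="font-size:20px; margin:0 0 10px 0;">Monthly newsletter</h1>
          <p style="margin:0;">Hello! Here is what's new this month.</p>
        </td>
      </tr>
    </table>
  </body>
</html>
"""
msg.add_alternative(html, subtype="html")

# To actually send, connect to your own SMTP server (placeholder host):
# with smtplib.SMTP("smtp.example.com") as server:
#     server.send_message(msg)

print(msg.as_string()[:400])  # preview the generated MIME structure
```

Sending a multipart/alternative message like this lets each client pick the richest part it can render, which is why the plain-text fallback matters even when the HTML is the main event.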