This chapter is an excerpt from the new book, SOA with REST: Principles, Patterns & Constraints for Building Enterprise Solutions with REST, authored by Thomas Erl, Benjamin Carlyle, Cesare Pautasso, Raj Balasubramanian, published by Pearson/Prentice Hall Professional, August 2012, ISBN 0137012519, as part of the Prentice Hall Service Technology Series from Thomas Erl. For more info please visit the publisher page: www.informit.com/title/0137012519
We’ll be posting more from this book soon – and keep an eye out for a chance to win a free copy, courtesy of Pearson and ITKnowledgeExchange.
Analysis and Service Modeling with REST
A fundamental characteristic of SOA projects is that they emphasize working toward a strategic target state that the delivery of each service is intended to support. Realizing this generally requires some increase in up-front analysis effort. A primary way in which SOA project delivery methodologies differ, therefore, is in how they position and prioritize analysis-related phases. This consideration applies equally to any service implementation medium. In fact, the actual medium and associated technology that will be used to physically design and develop services may not be known during initial analysis stages. It may be better to determine whether REST or another medium is the most suitable choice after having had the opportunity to model a number of planned service candidates and study their anticipated composition and interaction requirements.
There are two primary analysis phases in a typical SOA project:
- the analysis of individual services in relation to business process automation
- the collective analysis of a service inventory
The service-oriented analysis phase is dedicated to producing conceptual service definitions (service candidates) as part of the functional decomposition of business process logic. The service inventory analysis establishes a cycle whereby the service-oriented analysis process is carried out iteratively to whatever extent the project methodology will allow.
The following is an excerpt from the book Making Sense of NoSQL from Manning Publications.
By Dan McCreary and Ann Kelly
The four main patterns—key-value store, graph store, Bigtable store, and document store—are the major architecture patterns associated with NoSQL. As with most things in life, there are always variations on a theme. In this article, part of Making Sense of NoSQL, the authors discuss a representative sample of the types of pattern variations and how they can be combined to build NoSQL solutions in organizations.
We’re giving away a free copy of Making Sense of NoSQL to one lucky ITKE member. Share your data management story with us and we’ll pick the most compelling tale.
Variations of NoSQL Architectural Patterns
In this article, we will look at how each of the NoSQL patterns—key-value store, graph store, Bigtable store, and document store—can be varied by focusing on a different aspect of system implementation. We’ll look at how the architectures can be varied to use RAM or solid state drives (SSD), and then talk about how the patterns can be used on distributed systems or modified for enhanced availability. Finally, we’ll look at how database items can be grouped together in different ways to make navigation over many items easier.
Customization for Random Access Memory (RAM) or Solid State Drive (SSD) Stores
Some NoSQL products are designed to work specifically with one type of memory; for example, memcache, a key-value store, was designed specifically to cache items in RAM across multiple servers. A key-value store that only uses RAM is called a RAM cache; it’s flexible and gives application developers a general-purpose tool for storing global variables, configuration files, or intermediate results of document transformations. A RAM cache is fast, reliable, and can be thought of as another programming construct like an array, a map, or a lookup system. There are, however, several items that should be considered:

- Simple RAM-resident key-value stores are generally empty when the server starts up and can only be populated with values on demand.
- You need to define the rules about how memory is partitioned between the RAM cache and the rest of your application.
- RAM resident information must be saved to another storage system if you want it to persist between server restarts.
The key is to understand that RAM caches must be re-created from scratch each time a server restarts. A RAM cache that has no data in it is called a “cold cache,” which is why some systems get faster the longer they have been in use after a reboot.
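The on-demand population and cold-cache behavior described above can be sketched with a simple cache-aside pattern. This is a minimal illustration, not the internals of any particular product; a plain dict stands in for the slower durable store, and names like `backing_store` are hypothetical:

```python
class RamCache:
    """A RAM cache in front of a slower, durable key-value store."""

    def __init__(self, backing_store):
        self.backing_store = backing_store  # durable store (a dict here)
        self.cache = {}                     # empty ("cold") at startup

    def get(self, key):
        # Serve from RAM when possible; on a miss, fall back to the
        # backing store and warm the cache with the result.
        if key in self.cache:
            return self.cache[key]
        value = self.backing_store.get(key)
        if value is not None:
            self.cache[key] = value
        return value

    def put(self, key, value):
        # Write through to the durable store so the value survives a
        # restart; the RAM copy alone would be lost.
        self.backing_store[key] = value
        self.cache[key] = value
```

After a restart, `self.cache` is empty again and every first read pays the backing-store cost, which is exactly why a freshly rebooted system feels slower until the cache warms up.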
SSD systems provide permanent storage and are almost as fast as RAM for read operations. The Amazon DynamoDB key-value store service uses SSD for all its storage, resulting in very high-performance read operations. Write operations to SSD can often be buffered in large RAM caches, resulting in very fast write times until the RAM becomes full.
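The write-buffering idea above can be sketched as follows. This is an assumption-laden illustration of the general technique, not how any specific product implements it; `flush_to_disk` is a hypothetical stand-in for the real SSD I/O path:

```python
class BufferedWriter:
    """Buffer writes in RAM; flush a batch to slower storage when full."""

    def __init__(self, capacity, flush_to_disk):
        self.capacity = capacity            # how many entries RAM holds
        self.flush_to_disk = flush_to_disk  # callback doing the slow write
        self.buffer = {}

    def write(self, key, value):
        # Writes land in RAM first, which is why they appear very fast.
        self.buffer[key] = value
        if len(self.buffer) >= self.capacity:
            # RAM is full: pay the slower storage cost for the whole batch.
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_to_disk(dict(self.buffer))
            self.buffer.clear()
```

Until the buffer fills, every `write` call returns at RAM speed; once it fills, the caller absorbs the cost of a batch flush, matching the "fast until the RAM becomes full" behavior described above.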
As you will see, using RAM and SSD drives efficiently is critical when using distributed systems that provide for higher volume and availability.
Member Batye agreed to review Microsoft SQL Server 2012 Pocket Consultant. Disclosure: The publisher of the book provided a free copy for this review.
This small pocket book is a great resource for IT admins that need to have a quick guide at their fingertips to conquer SQL Server 2012. It’s easy to read and simple to use, whether as a day-to-day manual or a reference guide to Microsoft SQL Server 2012.
I found it had a good balance of step-by-step implementation and sections with more expanded detail. The thing I really liked about this book, though, was the size. It easily fits in your hand and is a perfect traveling companion. This book was also helpful for figuring out a few things I didn’t see in the other reference guides for SQL Server 2012.
The material is covered in a detailed manner that remains pretty easy to digest and absorb on the go. The author, William R. Stanek, is a Microsoft technical guru and is the best of the best in computer writing for a reason.
Overall, he wrote a good book.
This is a good little book as a reference when dealing with the aftereffects and investigation of a security breach in your company’s network. I would recommend this book as a manual for a network security admin or a student in the field of forensic IT.
The book provides a good, up-to-date overview of the security topic. Keep in mind the book is written from the point of view of a Cisco employee; nevertheless, the author gives the reader a practical manual on how to set up and run a security incident response team. The book offers true-to-life examples and training that will help in a network security job.
If you’d like to review a book for the Bookworm Blog, send me an email at Melanie at ITKnowledgeExchange.com to express your interest.
Check out Ed Tittel’s quickie book review of Windows Sysinternals Administrator’s Reference over at the Windows Enterprise Desktop blog. As someone who knows Mark Russinovich, the author and Technical Fellow at Microsoft, Tittel has some valuable insight into the book.
Interested in writing a review? Send me a little information about your IT background and your interest in reviewing IT books at Melanie@ITKnowledgeExchange.com.