Oracle Press are offering the Quick Start Guide to Oracle Query Tuning for free at the moment – register for the book at http://books.mcgraw-hill.com/ebookdownloads/solarwinds/
Having written several detailed reviews of Oracle Press’ Oracle Big Data Handbook (links below) I thought it useful to produce a summary. Overall it is a very insightful and informative book covering the range of technologies that Oracle offers to address the ‘Big Data’ space from a number of viewpoints: hardware with the Big Data Appliance (BDA); software with NoSQL, Enterprise R and Hadoop; the various adapters (e.g. ODI); and the features that existing products make available to support the big data story and contribute to a cohesive ecosystem. The book looks beyond the technologies classically linked to the ‘Big Data’ term to explore products such as Endeca. I like the fact that the book tries to explain the rationale behind some of the approaches adopted and the associated value propositions. Finally the book looks at governance, maturity and architectural capabilities. All of which makes for a very worthwhile read.
The book isn’t flawless; a few challenges can make the reading a little frustrating occasionally (at least for me, as I went cover to cover). For example, looking at the Big Data Appliance we seem to revisit the hardware specifications multiple times. The data governance perspective is, in my opinion, really about data governance generally rather than anything specific to big data. Occasionally the book seems to jump about when explaining a number of related areas, which means that using the book as a reference isn’t so easy. Don’t get me wrong, these issues are hugely outweighed by the value the book brings.
My detailed reviews:
This is the third and final part of our detailed review of Oracle Press’ Oracle Big Data Handbook (Part 1 here and Part 2 here). With the first sections having introduced the Big Data Appliance and the case for adopting an appliance, followed by an in-depth look at the technologies provided on the BDA for storing data, we move into the section that really delivers the payoff, namely the mechanics of converting data to information, i.e. analytics. This means this section of the book concentrates on the likes of Oracle Data Mining (ODM), Oracle R Enterprise and Endeca. The first of the chapters in this part of the book looks at the different types of analytics you might need to perform, for example data mining, predictive analytics, text mining and so on. The result is that the chapter does seem to flip-flop between more classic data warehousing (still Big Data in terms of sheer data volumes) and the more contemporary, hip and trendy side of ‘Big Data’ in the form of Hadoop and R. This may work nicely for a DBA or data scientist, but as a technologist and enterprise architect I didn’t find it so easy; personally I’d prefer to get a sense of each product stack and then look at how they complement or overlap. That said, after the first couple of sections, where both the tools and ideas are introduced, the flip-flopping is quicker, making it easier to cope with, but it also makes for a sizeable opening chapter for this section of the book. But let me show you the kinds of insights that can be gained from the book.
ODM extensions are built around the common Oracle toolkits of the RDBMS, SQL Developer and additional packages to provide powerful visual paradigms and precanned analytics functionality. Not being a data warehouse expert, I like the fact that the book takes time to describe the processes for building a data model and predictive engine and the likely paths through these steps. The book goes on to explain the available Excel tooling. Most of this is helped along with the context of a scenario. Given the claim of a realtime capability to take a transaction and use a predictive model against the transaction’s values to ascertain whether the transaction is likely to be indicative of the characteristics being sought, it would have been nice for the book to provide some outline benchmarks for the scenario. Realtime could be interpreted as a second or two, which, when you’re running millions of transactions with small profit margins per transaction, makes relying on such capabilities quite an expectation. Still, this doesn’t take away from the clarity of the information that is explained.
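To make the scoring idea concrete, here’s a minimal sketch in plain R (not ODM itself – the book does this with Oracle’s in-database tooling) of training a simple classifier on historical transactions and then scoring a single incoming transaction; the data and model choice are entirely made up for illustration:

```r
# Minimal sketch of predictive transaction scoring in plain R -
# illustrative only; ODM does this in-database with SQL Developer tooling.
set.seed(42)

# Toy historical transactions: amount, hour of day and a fraud flag.
history <- data.frame(
  amount = c(runif(95, 5, 500), runif(5, 300, 2000)),
  hour   = sample(0:23, 100, replace = TRUE),
  fraud  = c(rep(0, 95), rep(1, 5))
)

# Train a simple logistic regression as the 'predictive model'.
model <- glm(fraud ~ amount + hour, data = history, family = binomial)

# 'Realtime' scoring of one incoming transaction.
incoming <- data.frame(amount = 1500, hour = 3)
score <- predict(model, newdata = incoming, type = "response")
cat(sprintf("Fraud probability: %.3f\n", score))
```

The interesting benchmark question is how quickly that last `predict` step can be turned around at millions-of-transactions scale, which is exactly what the book leaves unanswered.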
From ODM the chapter moves on to introducing the R language. What really got my attention is the book’s apparent willingness to engage with an Open Source model (given the other major players in the evolution of R – Google, Facebook, LinkedIn etc. – you might argue there is no choice). The book addresses upfront the fact that Oracle hasn’t (yet) incorporated an R editor into SQL Developer or JDeveloper, and suggests a specific tool, RStudio. Then there is the engagement with a library of R extensions (CRAN – the Comprehensive R Archive Network, with over 5000 packages).
All of which begs the question: what is Oracle’s value proposition in this space? The book answers this by describing the challenges of using the Open Source edition of R (memory consumption and single-threaded characteristics) and how Oracle has addressed those by extending R into Oracle R Enterprise. In addition to addressing these constraints, Oracle’s extension recognises and works properly with the database governance and security layers. It is at this point we’re brought back to the earlier focus on the BDA, as the extensions allow the BDA Hadoop deployment to be used as a data source (along with the Oracle RDBMS). In many respects it feels like a similar proposition to Revolution Analytics (other than the RDBMS emphasis being different). As with the data mining, the example scenario is used to illustrate the application of R in conjunction with Hadoop and the Oracle RDBMS. To support the illustration the different additional libraries are explained (such as the Hadoop connector, RDBMS connector etc.).
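For a flavour of what this looks like in practice, here’s the rough shape of an ORE session as I understand it from the book and the ORE documentation – the connection details and the TRANSACTIONS table are made up, and the exact function signatures may vary between ORE releases:

```r
# Sketch of an Oracle R Enterprise session (connection details and the
# TRANSACTIONS table are hypothetical; check the ORE docs for your version).
library(ORE)

# Connect to the database; ORE then exposes database tables as
# ore.frame proxy objects so computation can stay in the database.
ore.connect(user = "analyst", sid = "orcl", host = "dbhost",
            password = "secret", all = TRUE)

ore.ls()                        # list tables visible as ore.frames
tx <- TRANSACTIONS              # an ore.frame proxy, not local data

# The transparency layer pushes this aggregation down to the database
# rather than pulling millions of rows into R's single-process memory.
by_region <- aggregate(tx$AMOUNT, by = list(tx$REGION), FUN = sum)

# Only pull the (small) result set into local R memory.
local_result <- ore.pull(by_region)
ore.disconnect()
```

The point being that the heavy lifting happens where the data lives, which is precisely how ORE sidesteps open source R’s memory and threading constraints.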
R Enterprise doesn’t stop there, but has been integrated with PL/SQL, OBIEE and BI Publisher, meaning that although some of the tools and the core solution are open source, Oracle has achieved a rather rich ecosystem – a point not really called out by the book, but the presentation of the details really makes this jump out.
Still within Chapter 9, we move on to text mining for activities such as sentiment analysis, and jump back to ODM with an explanation of the product’s capability in this space and the challenges that this kind of analysis presents. This is followed by a view of the support R offers. The chapter then moves on to things like spatial analytics and so on. These later forms of analytics don’t confront the ideas of ‘Big Data’ based on the book’s opening definition of the term. That isn’t to say that a brief overview of how Oracle Spatial works and its capabilities to support ideas such as Location Intelligence isn’t interesting, but I don’t see any differentiation between big data and normal patterns of use for Oracle Spatial. The examples provided, such as looking for patterns of location-based usage, can be handled by ensuring a consistent representation of location from which you can select by a range – either a postal/zip code or latitude and longitude, for example – for which there are more cost-effective tools that don’t necessitate pulling data out of a Hadoop cluster to perform such analysis. I would concede that Oracle Spatial has an information-rich data set that could be very effective, but to explore such ideas should we not be looking at something like ESRI’s integration with Hadoop (and more here), for example, unless Oracle offer a comparable capability?
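To illustrate the range-selection point: given a consistent latitude/longitude representation, a basic ‘who was near this location’ question needs nothing more exotic than a bounding-box filter – sketched here in plain R on made-up data:

```r
# Simple location filtering by range in plain R (toy data) - the kind
# of analysis that needs neither a spatial engine nor a Hadoop cluster.
events <- data.frame(
  user = c("a", "b", "c", "d"),
  lat  = c(51.50, 51.51, 48.85, 51.49),
  lon  = c(-0.12, -0.10, 2.35, -0.13)
)

# Bounding box roughly around central London (values illustrative).
box <- list(lat_min = 51.45, lat_max = 51.55,
            lon_min = -0.20, lon_max = 0.00)

nearby <- subset(events,
                 lat >= box$lat_min & lat <= box$lat_max &
                 lon >= box$lon_min & lon <= box$lon_max)
print(nearby)
```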
Having crossed a range of technologies, Chapter 10 briefly talks about IDEs, but then goes for a deeper dive into R, covering the supported Open Source edition and the Enterprise edition (ORE). The differences between the two versions and the licensing issues are well explained. Based on the description, what Oracle have done with the optimisation and the ability to transparently leverage the database seems pretty impressive. The only thing to remember is that transparently moving R’s computational load into the database raises the question of what impact it has on other database workloads. Oracle have also enabled ORE to access the predictive analytics capabilities that can reside within an Enterprise Database, which are also illustrated here.
After looking at ORE’s capabilities the book moves on to its connection to Hadoop for R (the Oracle R Connector for Hadoop – ORCH). ORCH provides the means to interact with HDFS along with the local file system and the RDBMS. The connector allows for the creation of MapReduce jobs using the R language and interaction with the job scheduler. To fully leverage these capabilities you will want to pull in CRAN libraries. The book then walks through a detailed example of using ORCH with MapReduce (including the R script elements). This is then followed by a similar set of examples demonstrating direct interaction with HDFS.
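I won’t reproduce the book’s example, but a minimal ORCH-style MapReduce job has roughly the shape below – treat the function names and signatures as assumptions drawn from the ORCH documentation rather than gospel, as they vary between connector releases:

```r
# Rough shape of an ORCH MapReduce job written entirely in R (function
# names/signatures are assumptions; check your connector release's docs).
library(ORCH)

# Put a local data frame (R's built-in 'cars' dataset) into HDFS.
input <- hdfs.put(cars, dfs.name = "cars_data")

# The mapper keys each record by speed; the reducer averages the
# stopping distance per key. Both run as R inside the Hadoop job.
result <- hadoop.exec(
  dfs.id  = input,
  mapper  = function(key, val) {
    orch.keyval(val$speed, val$dist)
  },
  reducer = function(key, vals) {
    orch.keyval(key, mean(unlist(vals)))
  }
)

# Pull the (small) result back out of HDFS into the local R session.
hdfs.get(result)
```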
Chapter 11 gives us a focus change again, this time to Endeca for Information Discovery. The book takes us through the history of Endeca and Oracle, explaining the component naming before and after the acquisition, and the two dimensions of the Endeca product stack – one eCommerce-specific and one for more general BI.
The chapter looks at the Endeca data model, which is a faceted or tagged model (i.e. all values are represented as a label and value pair – see the sketch after this paragraph). The book emphasises the benefits of this model – but not the downsides (needing to use the label to determine semantic meaning can have performance implications). This is important as it has implications for the flexibility to enrich data that Endeca can then leverage. Once the basic product and technicalities are examined, the book actually steps back to explain the differences between BI and Information Discovery, and therefore the approaches to using these tools. Then it is on to the tooling, such as the studio, the engine and the integration capabilities. The book continues to build on the technical side with the classic NFRs of how to make the technology scale. We then flip back to look at a number of example use cases, before a final jump to the mechanics of deploying Endeca and getting some development work underway. The sequencing of the chapter sections does seem a little odd, but it works; trying to dive into just the technical dimensions alone probably isn’t a practical proposition here.
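To picture the faceted model, think of each record as a bag of label/value assignments rather than a row in a fixed schema – my own rough sketch in R terms, not Endeca’s actual internal representation:

```r
# Rough sketch of a faceted/tagged record: each value carries its label
# rather than living in a fixed column (illustrative only, not Endeca's
# internal format).
record <- list(
  list(label = "Type",   value = "Camera"),
  list(label = "Brand",  value = "Acme"),
  list(label = "Colour", value = "Black"),
  list(label = "Colour", value = "Silver")   # multi-valued facets are fine
)

# Semantic meaning has to be recovered from the label at query time -
# the flexibility the book praises, and the lookup cost it glosses over.
colours <- sapply(Filter(function(f) f$label == "Colour", record),
                  function(f) f$value)
print(colours)
```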
Big Data governance is taken on in the final chapter of the book – Chapter 12. The emphasis here is to look at the definition of Data Management (e.g. the definition by the Data Management Association – DAMA) and how Big Data relates to it. So the chapter walks through the key data governance factors – many of which are characterised in the diagram above – for example focusing on common legislative considerations such as HIPAA and the Patriot Act’s KYC (Know Your Customer) requirements, through to EU Data Protection and the UK Financial Services Act. Having established a breadth-wise view of data governance, the book starts to look at how Big Data scenarios differ from raw data and day-to-day data sets. The problem I have with the chapter is that, while all the points being made are valid, they’re not specifically Big Data issues; they are governance issues for any data. What Big Data does is introduce technology to capture and use data in ways previously not considered, so using the technologies in this way may impact declarations that you have made to a data protection registrar – for example, declaring that you keep customer data to enable order fulfilment but then using the data to determine the effectiveness of sales channels would be an issue. But you don’t have to have big data technology to create such an issue (the book itself acknowledges that you could do the analysis with older approaches; the difference is that it is easier and quicker now). Having described some ‘any data’ guidance for your big data scenarios, the book goes into a raft of big data scenarios in different domains and references some of the relevant legislation. If you read the chapter as just data governance, it is a good reminder of the different considerations involved.
The final chapter takes on architectural and road-mapping considerations. A good way to conclude, as this sort of thing draws on all the preceding chapters’ points; and this is precisely how the chapter starts, recapping the value proposition described up front, followed by the infrastructure considerations – data volumes need to be processed in parallel to handle the volumes in timeframes that mean the insights can be assessed and reacted to in a meaningful manner. After the recap the book moves into a maturity model, although the origins of the model aren’t clear (I’d have thought that the basis of what is presented is rooted in a wider model). This naturally leads into looking at the Oracle Architecture Design Process (OADP). The details of OADP walk through the goals and mechanics of developing your ‘As-Is’ and ‘To-Be’ architectures so you can develop a transitional roadmap. The final step is obviously enabling the journey by developing the human skills necessary to undertake it.
So chapter 3 provides a brilliant overview of Hadoop and the ecosystem that has been developed around it, addressing the divergent versions of MapReduce leading to the likes of YARN, and touching on how commercialised versions of Hadoop (such as Cloudera’s) have taken this forward.
It moves on to describe the core solution components, such as Node Managers, and the relationship to hardware – the use of commodity kit rather than nice expensive SAN technology.
So now we have a good (pretty much uncoloured by Oracle) view of Hadoop, which leads into the next chapter (chapter 4), which looks at why Oracle have taken the approach of an appliance (which could be seen as contrary to the previously stated adoption of commodity kit).
So as you can see, Oracle have woven together a set of technologies into an Exadata-based platform which not only deals with Big Data analytics but ideally supports other volume scenario needs, so you’re not adding another data silo – all of which fits with Oracle’s Engineered Systems viewpoint. The book then explains the other factors involved in the BDA design – those of commercial considerations and value propositions in relation to its customer base – very refreshing to see (rather than rationalisation through technical arguments alone).
The book addresses the challenge of ‘why should I go to Oracle for big data?’, which is well argued on the basis of Oracle’s experience of very large relational deployments, its contributions to Hadoop via Cloudera, and so on. The chapter finishes with a cost comparison against buying comparable hardware to build your own cluster. Taking just list prices compared to HP, the hardware costs come in more or less the same, and that’s before you account for the fact that the Oracle price includes all the software.
Chapter 5 addresses the deployment of the BDA, explaining the configuration process, which with the combination of a tool called Mammoth (appropriate really) and the likes of Puppet seems pretty simple, as a lot of the solution comes preconfigured on the box – all of which is reasonably well explained. My only grumble is that we do seem to revisit the details of the hardware fairly regularly, as they are presented again here, although this time we go into a deeper dive on the configuration. One surprise that I’d not picked up on is that Oracle have made their NoSQL solution available as open source, although a little digging might explain why, as it has links back to Sleepycat’s BerkeleyDB, which Oracle acquired (more here). As the chapter moves through the physical aspects of the deployment it also highlights in clear terms any constraints Oracle imposes to ensure that the whole appliance is supportable, the most significant of these being the advanced networking that is set up.
As Chapter 5 moves through the deployment considerations it addresses the means to know that the appliance is running properly – so we’re talking about system monitoring, not just of the hardware but of the distributed nature of Hadoop and MapReduce. A brief view of the products deployed is given. Obviously this centres on the Enterprise Manager extensions, but also covers the component-level tooling such as Cloudera Manager.
Chapter 6 in many respects continues building out the view of Hadoop, briefly describing the analytics tooling in the Oracle RDBMS, the R language and the data mining/discovery of Endeca. The interesting points in the chapter are about the relationship with the RDBMS, particularly as an enterprise data warehouse – something I’ve not seen really addressed elsewhere, as the common world view seems to put Hadoop in the same camp as NoSQL, which seems to be gaining the zeal and polarity that Linux vs Windows used to have when it comes to the RDBMS. But I think the book makes a good case for the right tool for the right job.
Chapter 7 starts to drill into what the connector package offers, covering Oracle database data transfer, combining the R language with MapReduce, and ODI.
The database connector aims to transfer data between Hadoop and the Oracle RDBMS more efficiently than, say, using Sqoop to move data to and from an Oracle database (via ODI connectors, JDBC, direct OCI etc.). To fully understand the explanation of how this works you do need to understand the basics of MapReduce, although as the chapter progresses the relevant MapReduce operations are elaborated upon. We are also shown configuration fragments for the different connection approaches.
The final chapter of this section of the book looks at the NoSQL database in detail, starting with high-level ideas such as how NoSQL relates to the ACID and BASE concepts, then dropping down into significant (but valuable) detail by describing how clients are kept in sync through the use of separate threads picking up data about the data partitioning (sharding). Once the key components have been well described, the chapter moves on to explain how Oracle has optimized the product to make the NoSQL store as performant as possible, whilst providing a solution that is elastic in nature and highly resilient but still predictable in its dynamics.
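The routing idea at the heart of that description – a client hashing a key to a partition and sending the request straight to the owning node – can be sketched in a few lines of plain R (purely conceptual; Oracle NoSQL’s real partition-map mechanics are more sophisticated than this):

```r
# Conceptual illustration of key-based sharding in plain R - not the
# actual Oracle NoSQL algorithm, just the routing idea the book describes.
num_shards <- 4

# Hash a key to a shard: sum the character codes, take the modulo.
shard_for <- function(key, n = num_shards) {
  (sum(utf8ToInt(key)) %% n) + 1
}

# The client holds a (periodically refreshed) view of which node owns
# which shard, so reads and writes go straight to the right node.
shard_owner <- c("node-a", "node-b", "node-c", "node-d")

keys <- c("user:1001", "user:1002", "order:77")
for (k in keys) {
  s <- shard_for(k)
  cat(sprintf("%-10s -> shard %d on %s\n", k, s, shard_owner[s]))
}
```

The separate client threads the book describes are essentially what keeps that ownership map fresh as the store rebalances.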
The chapter finishes off with considerations such as installation and how it integrates with Hadoop and OBIEE.
Overall, this is a very informative chapter. Occasionally it feels like some of the information is being repeated, albeit in a different structure, but it isn’t the end of the world; if you’re reading from cover to cover you just need to press on.
So having written a series of detailed blog entries reviewing a couple of chapters at a time, I thought it might be worth providing a very brief summary review. Writing a book that provides both breadth of coverage for a very large subject area as well as meaningful depth is a very difficult trick to pull off, but the authors of this book have succeeded magnificently. The book tackles everything from the basic customization that users can perform through to in-depth feature development using the Oracle SOA stack, not to mention reporting and analytics. It has been written in an engaging way, providing context, background and Fusion Applications principles, and then taking examples of how to implement the different kinds of capabilities. From this book you should get a good grasp of what to expect and how to approach Fusion Applications extension work.
As a result I’d recommend this book to architects and project managers who want to understand what their development team should be doing and the risks of their approach. It would also form a good roadmap into the detail for developers starting out in the Fusion Applications space.
Detailed reviews can be seen at:
Our final detailed visit to Oracle Fusion Applications Development and Extensibility Handbook (Oracle Press) covers the final three chapters, which engage with the Scheduler, Look and Feel customisation, and the relationship with integration and service concepts (dare I use the acronym SOA).
The chapter on the Scheduler is pretty short, but then compared to many other chapters the size of the product/component is small. The book relates how the Scheduler behaves compared to the schedule management offered in E-Business Suite. The surprising thing is that each product domain (Financials, HCM, CRM etc.) has its own scheduler rather than a single shared service; the book doesn’t attempt to explain the rationale here, which is a shame. It does describe how it deploys into each domain, where the configuration lives, how to work with the configuration of the scheduler itself (e.g. where logging goes), and attempts to address some obvious questions from an administration perspective. It then goes on to how to create a custom scheduled process, with a worked illustration. All very well done, although I have to admit to a nagging feeling that I’m missing something – it may simply be that deployment is very much done through server administration rather than through an automated mechanism (so that if you develop and test in a preproduction environment, you could package up the deployment of the configured custom process to your production environment without needing to repeat the admin UI interactions, and so be assured there is no inconsistency between deployment instances).
The Look and Feel chapter is largely about applying changes so that the product feels like part of your business’ corporate solution – important if you’re exposing any aspects of it to the outside world. So aside from the use of the tools, you have the ADF controls to effectively ‘skin’ the product. The chapter provides a brief but concise view of how skinning works, in relation to the old E-Business technologies (CLAF and UIX) and the current HTML technologies of CSS and the key part of ADF (Rich Faces). More importantly it points out the relevant documentation and sources of information, and tooling such as the skinning editor, not to mention addressing the issue of deployment. Naturally there is a short illustration demonstrating an element of skinning.
The initial emphasis of the last chapter is the reality that organisations can’t simply migrate all non-Fusion apps such as E-Business Suite, Siebel etc. to the Fusion solutions in one hit; therefore you need to provide a degree of integration between solutions for as long as the transition takes. This neatly leads into the question of how do I know what components exist to support integration, which brings OER (Oracle Enterprise Repository) into the picture. So naturally the book provides a brief overview of the use of OER. The various Fusion apps offer different interfaces for different tasks (from bulk data export to business events), so each of these ‘patterns’ is briefly explained, along with how Fusion Apps being offered as a SaaS solution might impact a pattern’s availability. The chapter finishes by walking through the use of an SCA composite and web services to interact with a Fusion App – probably one of the most common approaches to integrating in a transactional (rather than bulk) manner. The only thing missing for me would be a brief discussion of Process Integration Packs (PIPs), which leverage all of the technologies underpinning Fusion Apps into a custom package of integration operations or ready-made integrations.
So the final chapters provide a strong close to the book, continuing to offer an excellent overview and pointing you to resources to ‘deep dive’ into as necessary.
Previous Chapter reviews:
We continue our review of Oracle Fusion Applications Development and Extensibility Handbook (Oracle Press) with chapters 11 and 12, which look at Reporting and Analytics respectively.
Reporting in Fusion Apps is based upon OBIEE rather than vanilla BI Publisher against the application database. This means that you can build your reporting capability against a far more diverse set of data sources (licence permitting, of course). It does also mean that the steps for creating reports, at least to start with, are more complex, as OBIEE realizes a multi-tier approach to report generation. The chapter goes on to describe the types of data source, the means by which reports can be configured for conditional execution, and then ideas such as ‘bursting’, where the report generation process is partitioned and run in parallel by multiple processes, each concentrating on a range of data (sounds a little like MapReduce, doesn’t it? – see the sketch below), and finally how to format the output. All of which is then supported with a detailed illustration. As you might imagine there are prepackaged reports and templates, so loading and configuring these in an environment is also considered.
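The bursting idea – partition the data by a key, generate each partition’s output in parallel – can be illustrated generically; the sketch below uses base R’s parallel package purely as an analogy, not as how BI Publisher actually implements it:

```r
# Conceptual sketch of 'bursting' in base R: partition the data by a
# key and generate each partition's output in parallel (an analogy for
# the BI Publisher mechanism, not its implementation).
library(parallel)

# Toy order data partitioned by region - one 'report' per region.
orders <- data.frame(
  region = rep(c("EMEA", "APAC", "AMER"), each = 100),
  amount = runif(300, 10, 500)
)
partitions <- split(orders, orders$region)

# Each worker 'renders' its own partition independently.
render_report <- function(part) {
  sprintf("%s: %d orders, total %.2f",
          unique(part$region), nrow(part), sum(part$amount))
}

# mclapply forks one worker per partition (use parLapply on Windows,
# where fork-based mclapply is unavailable).
reports <- mclapply(partitions, render_report, mc.cores = 3)
invisible(lapply(reports, cat, "\n"))
```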
The book recognises that in a single chapter you can only really scratch the surface of reporting, and makes reference to other tools in the OBIEE kit bag such as OTBI (Oracle Transactional Business Intelligence) and the Mobile BI Composer. The only little trick missed here is the opportunity to point out some good sources of further information. But that isn’t significant; there is such a thing as Google, and it might just take a bit more reading to find the best resources around these tools.
Chapter 12 looks briefly at the use of analytics through OBIA (Oracle Business Intelligence Applications) and Oracle Hyperion (specifically Essbase), which comes with Financial Reporting Studio, and focuses on OTBI. The chapter feels pretty standalone from the preceding chapter on reporting – which when using the book more as a reference is great, but on a cover-to-cover read can niggle a little, particularly as both chapters rely on OBIEE background. But to be honest we are nitpicking here. As with previous chapters there is an illustrated scenario walked through (the layout of which isn’t as good as in previous chapters – but that is a relative observation). The illustration perhaps misses the opportunity for a killer blow: referencing the core app customisation to show how you might bind the dynamic reporting provided by an OTBI view into the core CRM with the customisation. I have to say I am impressed by the OTBI technologies, given the integration into the Fusion security framework, leveraging ADF and its optimisation strategies – all of which are clearly explained here.
It would have been nice to explore OBIA and Oracle Hyperion a bit further, but doing so would probably have warranted additional chapters. Overall a good chapter again, covering a lot of capability efficiently.
Previous Chapter reviews:
Back to the review of Oracle Fusion Applications Development and Extensibility Handbook (Oracle Press): Chapters 9 & 10 take us from developing ADF-based extensions to BPM, and to developing capabilities using a lot more of the SOA-based building blocks such as Human Workflow.
The BPM chapter isn’t huge, as the real effort behind BPM-driven processes is more SOA-based development. But the book does step back to explain Oracle’s history in the BPM and BPMN space and how Fusion Apps work using these technologies. So what we have is a good chapter focusing more on ideas and principles.
Chapter 10 naturally takes us into building full extensions, which could include implementing the activities needed to realise a BPMN process. The chapter is almost two separate halves: the first covers the ideas and approaches adopted by Fusion Apps – such as the triggering of processes through EDN – and goes on into the approval framework and how it compares to the pre-Fusion products. The second half of the chapter turns all of this into practical steps in the various tools to realize functional extensions, in a series of comprehensive steps.
Finally the chapter tackles the issues of deploying the customisation and the implications for patching and updating your Fusion Apps.
So yet again the authors have managed to cover a lot of ideas very effectively, providing sufficient insight that you should be able to find the necessary information if you’re working with a Fusion application not discussed here.
Previous Chapter reviews:
As a result of my involvement with the UK Oracle User Group I have been given the opportunity to review Oracle Press’ Oracle Big Data Handbook. I have to admit that I am not a Big Data expert (and reviewing this book was an opportunity to build my knowledge a bit more).
So, Chapter 1 starts by providing a brief but succinct history of Big Data (from Google’s work with MapReduce and lesser-known technologies such as Sawzall and Dremel) and the rise of Hadoop. The primary value proposition of Big Data is briefly explored (highlighting the point that an RDBMS such as Oracle can actually accommodate lots of data when it is in a structured form), but Big Data is the nexus of volume, velocity and variety (multiple structures, semi-structured and unstructured data). The book suggests that in addition to these factors there is the data’s Value (a structured transaction has a lot more value than the same quantity of unstructured data, which delivers its value when placed in context with other data).
From here, a brief look at the Oracle Big Data landscape leads nicely into a layout for the chapters of the book, ranging from the Oracle Engineered Systems idea to its adoption of Hadoop through Cloudera and NoSQL, and on to how this becomes a joined-up solution with the likes of OBIEE, passing through Oracle’s extended version of the R language.
In all, a brief, succinct and informative intro.
Chapter 2 takes us on the journey of the business value of Big Data ideas, taking us through some examples such as MCI’s campaign in the 1990s to develop insight by mining for friends-and-family information. In its day we called this sort of thing data mining; now it’s another aspect of big data. The chapter moves on to describe the idea of an Information Chain Reaction (ICR) – where the output from one stage produces a response in the next, with communication, change and connection being the primary triggers.
The authors make an interesting point in the book about taking the metrics for volumes of traffic on social sites with a pinch of salt – not because of the possibility of overstatement (although that is a possibility; after all, user numbers are an easy measure for investors) but because of how and when the measurement is done, and even just changes in an API or user process. For example, adopting an approach that drives users to reverify their details regularly could create more user activity while delivering no more real information. Most importantly, what is the value of the information/traffic to you?
I also love the fact that the book uses quotes from famous individuals to emphasise points, for example:
The temptation to form premature theories upon insufficient data is the bane of our profession.
– Sherlock Holmes
Continuing with the review of Oracle Fusion Applications Development and Extensibility Handbook (Oracle Press), Chapters 7 & 8 get into the development side of building extensions through the use of JDeveloper and the ADF framework, although this approach is not recommended for CRM if it can be helped – but then the Page Composer is far more powerful in the CRM context.
Chapter 7 walks you quickly through the process of establishing JDeveloper so that you can get underway with the customisation. Along the way the book references the very detailed Oracle guides and shares useful tips as well (for example, how to share configuration between JDeveloper instances for connecting to a Fusion Apps server without having to go through reconfiguration).
As Fusion Apps uses ADF for its framework, knowledge of ADF is going to help you understand more easily what is going on, as the book is not an ADF guide and focuses upon the use of the framework, providing some honest hints and observations (e.g. it is necessary to know which task flow forms the basis of any page, and depending upon the product the identification of this information can be easy or difficult). The bulk of chapter 7 is focused on guiding you through two scenarios for customisation.
By the end of chapter 7, although a lot of information has been shared, I’d have liked to have seen a couple of things addressed. The first is how to minimise the risk and impact of customisation, so that deploying a patch doesn’t clash with, or has minimal impact on, any customisation; it is all too easy for organisations to customise a product to the point where the C in COTS far outweighs the O and T. Remember CEMLI? The second aspect I’d hoped to see is the incorporation of configuration control of the development changes – but this is probably more one of my pet issues showing.
Chapter 8 goes into the mechanics of developing your own UI within a Fusion App, covering DB table creation, business components, the UI and so on, including the security framework, creation of workflow elements and more. I have to admit that I found this chapter easier than the pure customisation work of chapter 7 – although that could be because the whole mechanism is a bit more discrete.
Neither chapter really takes on the question of testing (at integration or unit level) – I’m sure that, given all the good guidance here, the authors have a few good practices and tricks they could share on how to make testing as simple as possible.
Aside from a couple of small points, all said and done, the book does a tremendous job of addressing an enormous subject area, and recognises that it isn’t giving you every little detail by telling you which sections of the Fusion Developer’s Guide will provide more detailed information. Bottom line: what the book doesn’t explain, you have the insight to go and find in the official Oracle online docs (without having to plough through 1000+ pages of developer guide).
See earlier chapter reviews at: