Spring Data O'Reilly PDF




Nutshell Handbook, the Nutshell Handbook logo, and the O'Reilly logo are registered trademarks of O'Reilly Media, Inc. Spring Data, the image of a giant squirrel, and related trade dress are trademarks of O'Reilly Media, Inc.

Spring Data: Modern Data Access for Enterprise Java. Mark Pollack, Oliver Gierke, Thomas Risberg, Jon Brisbin, and Michael Hunger. O'Reilly. This hands-on introduction shows you how Spring Data makes it relatively easy to build applications across a wide range of new data access technologies.

Overview of the New Data Access Landscape

The data access landscape over the past seven or so years has changed dramatically. Relational databases, the heart of storing and processing data in the enterprise for over 30 years, are no longer the only game in town. The past seven years have seen the birth, and in some cases the death, of many alternative data stores that are being used in mission-critical enterprise applications. An example of a problem that pushes traditional relational databases to the breaking point is scale. How do you store hundreds or thousands of terabytes (TB) in a relational database?

In , IDC reported that the amount of information created and replicated will surpass 1. While data that is stored in relational databases is still crucial to the enterprise, these new types of data are not being stored in relational databases. While general consumer demands drive the need to store large amounts of media files, enterprises are finding it important to store and analyze many of these new sources of data.

In the United States, companies in all sectors have at least TBs of stored data, and many have more than 1 petabyte (PB). For example, companies can better understand the behavior of their products if the products themselves are sending "phone home" messages about their health. To better understand their customers, companies can incorporate social media data into their decision-making processes.

This has led to some interesting mainstream media reports, for example, on why Orbitz shows more expensive hotel options to Mac users and how Target can predict when one of its customers will soon give birth, allowing the company to mail coupon books to the customer's home before public birth records are available. (Sources: IDC, "Extracting Value from Chaos"; US Bureau of Labor Statistics.)

Big data generally refers to the process in which large quantities of data are stored, kept in raw form, and continually analyzed and combined with other data sources to provide a deeper understanding of a particular domain, be it commercial or scientific in nature.

Many companies and scientific laboratories had been performing this process before the term big data came into fashion. What makes the current process different from before is that the value derived from the intelligence of data analytics is higher than the hardware costs. Aggregate data transfer rates for clusters of commodity hardware that use local disk are also significantly higher than those of similarly priced SAN- or NAS-based systems.

On the software side, the majority of the new data access technologies are open source. While open source does not mean zero cost, it certainly lowers the barrier for entry and overall cost of ownership versus the traditional commercial software offerings in this space.

Another problem area that new data stores have identified with relational databases is the relational data model itself. If you are interested in analyzing the social graph of millions of people, doesn't it sound quite natural to consider using a graph database so that the implementation more closely models the domain?

What if requirements are continually driving you to change your relational database management system (RDBMS) schema and object-relational mapping (ORM) layer? Perhaps a schema-less document database will reduce the object mapping complexity and provide a more easily evolvable system as compared to the more rigid relational model. While each of the new databases is unique in its own way, you can provide a rough taxonomy across most of them based on their data models. A graph database, for example, is based on graph theory: its data model has nodes and edges, each of which may have properties.
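To make the graph data model concrete, here is a minimal, store-agnostic sketch in plain Java. The class names, the "KNOWS" relationship, and the property keys are purely illustrative and are not part of any particular graph database's API; they just show that nodes and edges each carry a property map.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal property-graph sketch: nodes and edges both carry properties.
class Node {
    final Map<String, Object> properties = new HashMap<>();
    final List<Edge> edges = new ArrayList<>();
}

class Edge {
    final String type;          // e.g. "KNOWS" in a social graph
    final Node from;
    final Node to;
    final Map<String, Object> properties = new HashMap<>();

    Edge(String type, Node from, Node to) {
        this.type = type;
        this.from = from;
        this.to = to;
        from.edges.add(this);   // register the outgoing edge on the source node
    }
}

public class PropertyGraphDemo {
    public static void main(String[] args) {
        Node alice = new Node();
        alice.properties.put("name", "Alice");
        Node bob = new Node();
        bob.properties.put("name", "Bob");

        Edge knows = new Edge("KNOWS", alice, bob);
        knows.properties.put("since", 2011);

        // Traversing the graph means following edges, not joining tables.
        System.out.println(alice.edges.get(0).to.properties.get("name")); // Bob
    }
}
```

The point of the sketch is the shape of the data, not the implementation: a social-graph query becomes an edge traversal starting from a node, rather than a self-join on a person table.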

In retrospect, the name NoSQL, while catchy, isn't very accurate because it seems to imply that you can't query the database, which isn't true. It reflects the basic shift away from the relational data model as well as a general shift away from the ACID (atomicity, consistency, isolation, durability) characteristics of relational databases.

One of the driving factors for the shift away from ACID characteristics is the emergence of applications that place a higher priority on scaling writes and having a partially functioning system even when parts of the system have failed. While scaling reads in a relational database can be achieved through the use of in-memory caches that front the database, scaling writes is much harder. To put a label on it, these new applications favor a system that has so-called BASE semantics (basically available, scalable, eventually consistent).


However, they offer similar features to NoSQL databases in terms of the scale of data they can handle as well as distributed computation features that colocate computing power and data. As you can see from this brief introduction to the new data access landscape, there is a revolution taking place, which for data geeks is quite exciting. Relational databases are not dead; they are still central to the operation of many enterprises and will remain so for quite some time.

The trends, though, are very clear: new data access technologies are solving problems that traditional relational databases can't, so we need to broaden our skill set as developers and have a foot in both camps.

In this book we aim to help developers get a handle on how to effectively develop Java applications across a wide range of these new technologies. The Spring Data project directly addresses these new technologies so that you can extend your existing knowledge of Spring to them, or perhaps learn more about Spring as a byproduct of using Spring Data. However, it doesn't leave the relational database behind.


How to Read This Book

This book is intended to give you a hands-on introduction to the Spring Data project, whose core mission is to enable Java developers not only to use state-of-the-art data processing and manipulation tools but also to use traditional databases in a state-of-the-art manner. We'll start by introducing you to the project, outlining the primary motivation of SpringSource and the team. We'll also describe the domain model of the sample projects that accompany each of the later chapters, as well as how to access and set up the code (Chapter 1).

We'll then discuss the general concepts of Spring Data repositories, as they are a common theme across the various store-specific parts of the project (Chapter 2). The same applies to Querydsl, which is discussed in general in Chapter 3. These two chapters provide a solid foundation to explore the store-specific integration of the repository abstraction and advanced query functionality.

HBase, a column family database, is covered in a later chapter. These chapters outline mapping domain classes onto the store-specific data structures, interacting easily with the store through the provided application programming interface (API), and using the repository abstraction. Both projects build on the repository abstraction and allow you to easily export Spring Data managed entities to the Web, either as a representational state transfer (REST) web service or as backing to a Spring Roo-built web application.

The book next takes a tour into the world of big data: Hadoop, and Spring for Apache Hadoop in particular. It will introduce you to use cases implemented with Hadoop and show how the Spring Data module eases working with Hadoop significantly. This leads into a more complex example of building a big data pipeline using the Spring Batch and Spring Integration projects, which come nicely into play in big data processing scenarios (Chapter 12 and the following chapter). The final chapter discusses the Spring Data support for GemFire, a distributed data grid solution.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic
Indicates new terms, URLs, email addresses, filenames, and file extensions.

Constant width
Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.

Constant width italic
Shows text that should be replaced with user-supplied values or by values determined by context.

This icon signifies a tip, suggestion, or general note. This icon indicates a warning or caution.

Using Code Examples

This book is here to help you get your job done.

In general, you may use the code in this book in your programs and documentation. You do not need to contact us for permission unless you're reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book does not require permission. Answering a question by citing this book and quoting example code does not require permission.

Incorporating a significant amount of example code from this book into your product's documentation does require permission. We appreciate, but do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN.

For example: "Spring Data by Mark Pollack, Oliver Gierke, Thomas Risberg, Jon Brisbin, and Michael Hunger (O'Reilly)." If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us. The code samples are posted on GitHub.

Safari Books Online

Safari Books Online is an on-demand digital library that delivers expert content in both book and video form from the world's leading authors in technology and business.

Technology professionals, software developers, web designers, and business and creative professionals use Safari Books Online as their primary resource for research, problem solving, learning, and certification training. Safari Books Online offers a range of product mixes and pricing programs for organizations, government agencies, and individuals.

For more information about Safari Books Online, please visit us online.

How to Contact Us

Please address comments and questions concerning this book to the publisher: O'Reilly Media, Inc., Gravenstein Highway North, Sebastopol, CA. We have a web page for this book, where we list errata, examples, and any additional information.

To comment or ask technical questions about this book, send email to the publisher. For more information about our books, courses, conferences, and news, see our website. Find us on Facebook, follow us on Twitter, and watch us on YouTube.

Acknowledgments

We would like to thank Rod Johnson and Emil Eifrem for starting what was to become the Spring Data project.

A big thank you goes to David Turanski for pitching in and helping out with the GemFire chapter. Thank you to Richard McDougall for the big data statistics used in the introduction, and to Costin Leau for help with writing the Hadoop sample applications. We would also like to thank O'Reilly Media, especially Meghan Blanchette for guiding us through the project, production editor Kristen Borg, and copyeditor Rachel Monaghan. Thank you to the community around the project for sending feedback and issues so that we could constantly improve.

Last but not least, thanks to our friends and families for their patience, understanding, and support.

Rod Johnson and Emil Eifrem were trying to integrate the Neo4j graph database with the Spring Framework and evaluated different approaches. That session created the foundation for what would eventually become the very first version of the Neo4j module of Spring Data, a new SpringSource project aimed at supporting the growing interest in NoSQL data stores, a trend that continues to this day.

Spring has provided sophisticated support for traditional data access technologies from day one. This support mainly consisted of simplified infrastructure setup and resource management, as well as exception translation into Spring's DataAccessExceptions. This support has matured over the years, and the latest Spring versions contained decent upgrades to this layer of support. The traditional data access support in Spring has targeted relational databases only, as they were the predominant tool of choice when it came to data persistence.

As NoSQL stores enter the stage to provide reasonable alternatives in the toolbox, there's room to fill in terms of developer support. Beyond that, there are yet more opportunities for improvement even for the traditional relational stores.

Ironically, it's the nonfeature (the lack of support for running queries using SQL) that actually named this group of databases. As these stores have quite different traits, their Java drivers have completely different APIs to leverage each store's special traits and features. Trying to abstract away their differences would actually remove the benefits each NoSQL data store offers. A graph database should be chosen to store highly interconnected data. A document database should be used for tree and aggregate-like data structures.

This theme is clearly reflected in the specification later on. It defines concepts and APIs that are deeply connected to the world of relational persistence. How should one implement the transaction API for stores like MongoDB, which essentially does not provide transactional semantics across multidocument manipulations? On the other hand, all the special features NoSQL stores provide (geospatial functionality, map-reduce operations, graph traversals) would have to be implemented in a proprietary fashion anyway, as JPA simply does not provide abstractions for them.

So we would essentially end up in a worst-of-both-worlds scenario: the parts that can be implemented behind JPA, plus additional proprietary features to re-enable store-specific features.

Still, we would like to see the programmer productivity and programming model consistency known from various Spring ecosystem projects to simplify working with NoSQL stores. This led the Spring Data team to declare the following mission statement: Spring Data provides a familiar and consistent Spring-based programming model for NoSQL and relational stores while retaining store-specific features and capabilities.


So we decided to take a slightly different approach. Instead of trying to abstract all stores behind a single API, the Spring Data project provides a consistent programming model across the different store implementations using patterns and abstractions already known from within the Spring Framework.

This allows for a consistent experience when you're working with different stores. This support is mainly implemented as XML namespaces and support classes for Spring JavaConfig, and it allows us to easily set up access to a Mongo database, an embedded Neo4j instance, and the like.
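As a hedged illustration of what such setup support looks like, the following JavaConfig sketch follows the Spring Data MongoDB 1.x-era configuration base class; the database name is made up, and the exact class and method names vary between versions:

```java
import com.mongodb.Mongo;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.config.AbstractMongoConfiguration;

// Minimal JavaConfig sketch for Spring Data MongoDB (1.x-era API).
// "ecommerce" is an illustrative database name, not one used by the book.
@Configuration
public class MongoConfig extends AbstractMongoConfiguration {

    @Override
    protected String getDatabaseName() {
        return "ecommerce";
    }

    @Override
    public Mongo mongo() throws Exception {
        // Connects to a MongoDB instance on localhost; adjust host/port as needed.
        return new Mongo("localhost");
    }
}
```

The configuration base class also registers a MongoTemplate bean, so a few lines like these are typically enough to start injecting store access into application code.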

So, when working with the native Java drivers, you would usually have to write a significant amount of code to map data onto the domain objects of your application when reading, and vice versa when writing. Thus, a very core part of the Spring Data modules is a mapping and conversion API that allows obtaining metadata about domain classes to be persisted and enables the actual conversion of arbitrary domain objects into store-specific data types.

On top of that, we'll find opinionated APIs in the form of template pattern implementations already well known from Spring's JdbcTemplate, JmsTemplate, etc. Thus, there is a RedisTemplate, a MongoTemplate, and so on. As you probably already know, these templates offer helper methods that allow us to execute commonly needed operations, like persisting an object, with a single statement while automatically taking care of appropriate resource management and exception translation. Beyond that, they expose callback APIs that allow you to access the store-native APIs while still getting exceptions translated and resources managed properly.
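A sketch of both styles against the Spring Data MongoDB template API might look as follows; the Customer class, the collection name, and the connection details are assumptions borrowed from our sample domain, not code from the book:

```java
// Sketch: a configured template plus the sample Customer class are assumed.
MongoTemplate template = new MongoTemplate(mongo, "ecommerce");

// One statement persists the object; resource management and exception
// translation into DataAccessExceptions happen behind the scenes.
template.save(new Customer("Alice", "Smith"));

// The callback API hands us the store-native collection object while
// still translating exceptions and managing resources properly.
long count = template.execute("customer", collection -> collection.count());
```

The same division of labor (helper methods for the common path, callbacks for the store-native escape hatch) applies to the other templates as well.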

These features already provide us with a toolbox to implement a data access layer like we're used to with traditional databases. The upcoming chapters will guide you through this functionality. To ease that process even more, Spring Data provides a repository abstraction on top of the template implementation that reduces the effort of implementing data access objects to a plain interface definition for the most common scenarios, like performing standard CRUD operations as well as executing queries in case the store supports that.

This abstraction is actually the topmost layer and blends the APIs of the different stores as much as reasonably possible. Thus, the store-specific implementations of it share quite a lot of commonalities. This is why you'll find a dedicated chapter (Chapter 2) introducing you to the basic programming model.

Now let's take a look at our sample code and the domain model that we will use to demonstrate the features of the particular store modules.

The Domain

To illustrate how to work with the various Spring Data modules, we will be using a sample domain from the ecommerce sector. As NoSQL data stores usually have a dedicated sweet spot of functionality and applicability, the individual chapters might tweak the actual implementation of the domain or even only partially implement it.

This is not to suggest that you have to model the domain in a certain way, but rather to emphasize which store might actually work better for a given application scenario.

Figure: The domain model.

At the core of our model, we have a customer who has basic data like a first name, a last name, an email address, and a set of addresses in turn containing street, city, and country. We also have products that consist of a name, a description, a price, and arbitrary attributes.

These abstractions form the basis of a rudimentary CRM (customer relationship management) and inventory system. On top of that, we have orders a customer can place. An order contains the customer who placed it, shipping and billing addresses, the date the order was placed, an order status, and a set of line items. These line items in turn reference a particular product, the number of products to be ordered, and the price of the product.
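The domain just described can be sketched in plain Java as follows. Field selection follows the text; the class shapes, the constructor signatures, and the example status value are illustrative, not the book's actual code:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of the sample ecommerce domain described above.
class Address {
    String street, city, country;
    Address(String street, String city, String country) {
        this.street = street; this.city = city; this.country = country;
    }
}

class Customer {
    String firstName, lastName, emailAddress;
    List<Address> addresses = new ArrayList<>();
    Customer(String firstName, String lastName) {
        this.firstName = firstName; this.lastName = lastName;
    }
}

class Product {
    String name, description;
    BigDecimal price;
    Map<String, Object> attributes = new HashMap<>(); // arbitrary attributes
    Product(String name, BigDecimal price) { this.name = name; this.price = price; }
}

class LineItem {
    Product product;
    int amount;              // number of products ordered
    BigDecimal price;        // price of the product at order time
    LineItem(Product product, int amount) {
        this.product = product; this.amount = amount; this.price = product.price;
    }
}

class Order {
    Customer customer;
    Address shippingAddress, billingAddress;
    Date orderDate = new Date();
    String status = "PAYMENT_EXPECTED"; // illustrative order status
    List<LineItem> lineItems = new ArrayList<>();
}
```

The individual store chapters then map these classes onto documents, nodes, key/value pairs, or tables, which is where the implementations start to diverge.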

The sample code is a Maven project containing a module per chapter. So, if you already have the IDE downloaded and installed (have a look at Chapter 3 for details), you can choose the Import option of the File menu.

Select the Existing Maven Projects option from the dialog box (Figure: Importing Maven projects into Eclipse, step 1 of 2). In the next window, select the folder in which you've just checked out the project using the Browse button.

After you've done so, the pane right below should fill with the individual Maven modules, listed and checked. The import will also resolve the necessary dependencies and source folders according to the pom.xml files.

Figure: Importing Maven projects into Eclipse, step 2 of 2.

You should eventually end up with a Package or Project Explorer looking something like the accompanying figure. The projects should compile fine and contain no red error markers. The projects using Querydsl (see Chapter 5 for details) might still carry a red error marker. This is due to the m2eclipse plug-in needing additional information about when to execute the Querydsl-related Maven plug-ins in the IDE build life cycle.

The integration for that can be installed from the m2e-querydsl extension update site; you'll find the most recent version of it at the project home page. Copy the link to the latest version listed there.

Installing the feature exposed through that update site, restarting Eclipse, and potentially updating the Maven project configuration (right-click on the project, then Maven, then Update Project) should leave you with all the projects free of Eclipse error markers and building just fine.

Figure: Eclipse Project Explorer with import finished.

Select the Open Project menu entry to show the dialog box. The IDE opens the project and fetches needed dependencies. The project is then ready to be used. You will see the Project view and the Maven Projects view. Compile the project as usual.

Just right-click on the module and choose Add Framework. In the resulting dialog box, check JavaEE Persistence support and select Hibernate as the persistence provider.

Too much boilerplate code had to be written.

Domain classes were anemic and not designed in a real object-oriented or domain-driven manner. The goal of the repository abstraction of Spring Data is to reduce the effort required to implement data access layers for various persistence stores significantly. The following sections will introduce the core concepts and interfaces of Spring Data repositories. We will use the Spring Data JPA module as an example and discuss the basic concepts of the repository abstraction. For other stores, make sure you adapt the examples accordingly.

Quick Start

Let's take the Customer domain class from our domain that will be persisted to an arbitrary store. The Spring Data repository approach allows you to get rid of most of the implementation code and instead start with a plain interface definition for the entity's repository. Its main responsibility is to allow the Spring Data infrastructure to pick up all user-defined Spring Data repositories.
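The elided example can be sketched roughly as follows. CrudRepository is the actual Spring Data base interface; the Customer class and its Long ID type are assumptions from our sample domain:

```java
import org.springframework.data.repository.CrudRepository;

// A plain interface definition is all that's needed: Spring Data detects it
// and creates a proxy bean implementing save(), delete(), and the other
// CRUD operations. The two type parameters capture the managed domain
// class and the type of its ID property.
public interface CustomerRepository extends CrudRepository<Customer, Long> {
}
```

No implementation class is written at all; the interface declaration is the entire data access layer for the standard CRUD case.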

Beyond that, it captures the type of the domain class managed alongside the type of the ID of the entity, which will come in quite handy at a later stage. In our sample case, we will use JPA. We just need to configure the XML element's base-package attribute with our root package so that Spring Data will scan it for repository interfaces.

The annotation can also get a dedicated package configured to scan for interfaces. Without any further configuration given, it will simply inspect the package of the annotated class. For other stores, we simply use the corresponding namespace elements or annotations. The configuration snippet will now cause the Spring Data repositories to be found, and Spring beans will be created that actually consist of proxies that implement the discovered interface.

Thus a client could now go ahead and get access to the bean by letting Spring simply autowire it. A typical requirement might be to retrieve a Customer by its address. To do so, we add the appropriate query method. The infrastructure will inspect the methods declared inside the interface and try to determine a query to be executed on method invocation.

If you don't do anything more than declare the method, Spring Data will derive a query from its name. There are other options for query definition as well; you can read more about them in "Defining Query Methods." The query can be derived because we followed the naming convention of the domain object's properties: the relevant part of the query method name refers to the Customer class's address property, and thus Spring Data will automatically derive a query of the form select c from Customer c where c.address matches the given parameter.

It will also check that you have valid property references inside your method declaration, and will cause the container to fail to start at bootstrap time if it finds any errors. Clients can now simply execute the method, causing the given method parameters to be bound to the query derived from the method name, and the query to be executed. The method declaration is inspected by the infrastructure and parsed, and a store-specific query is derived eventually.

However, as the queries become more complex, the method names would just become awkwardly long. For more complex queries, the keywords supported by the method parser wouldn't even suffice. Thus, the individual store modules ship with a Query annotation that takes a query string in the store-specific query language and potentially allows further tweaks regarding the query execution.

Thus, to back our existing method with an externalized named query, the key would have to be Customer. followed by the method name. Spring Data will strip the prefixes findBy, readBy, and getBy from the method name and start parsing the rest of it. At a very basic level, you can define conditions on entity properties and concatenate them with And and Or. There are also some general things to notice: the expressions are usually property traversals combined with operators that can be concatenated.
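Putting the pieces together, a repository sketch combining derived and manually declared queries might look like this. The property names are assumptions following our sample domain, and the @Query string is illustrative JPQL rather than an example from the book:

```java
import java.util.List;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;

public interface CustomerRepository extends CrudRepository<Customer, Long> {

    // Derived query: after stripping the findBy prefix, "AddressesCity" is
    // parsed as a property traversal into the city of a customer's addresses.
    List<Customer> findByAddressesCity(String city);

    // Conditions on entity properties concatenated with And.
    Customer findByFirstNameAndLastName(String firstName, String lastName);

    // Manually declared query for cases the method-name parser can't express.
    @Query("select c from Customer c where c.lastName = ?1 or c.firstName = ?1")
    List<Customer> findByName(String name);
}
```

The derived methods cost nothing to maintain as long as the property names stay valid; the annotated method trades that safety for full control over the query string.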

