
Books

24x18x2, 2013, Shroff/Packt Publishing, 1, India, Paperback, New, Piero Giacomelli

The rise of the Internet and social networks has created new demand for software that can analyze large datasets scaling up to 10 billion rows. Apache Hadoop was created to handle such heavy computational tasks. Mahout gained recognition for providing data-mining classification algorithms that can be used with datasets of this kind.
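As a flavour of the kind of API the book works with, here is a minimal, hypothetical sketch of training and applying one of Mahout's classifiers (the SGD-based logistic regression); the features, labels, and data below are invented for illustration and are not taken from the book.

    // Minimal sketch: train Mahout's SGD logistic regression on toy data.
    // All feature values and labels here are invented for illustration.
    import org.apache.mahout.classifier.sgd.L1;
    import org.apache.mahout.classifier.sgd.OnlineLogisticRegression;
    import org.apache.mahout.math.DenseVector;
    import org.apache.mahout.math.Vector;

    public class ClassifierSketch {
        public static void main(String[] args) {
            // 2 categories, 3 numeric features, L1 prior on the weights
            OnlineLogisticRegression learner =
                new OnlineLogisticRegression(2, 3, new L1()).learningRate(0.1);

            double[][] examples = {{1, 0, 3}, {0, 5, 1}, {1, 1, 2}, {0, 4, 0}};
            int[] labels = {0, 1, 0, 1};

            // One online pass over the toy training data
            for (int i = 0; i < examples.length; i++) {
                learner.train(labels[i], new DenseVector(examples[i]));
            }

            // classifyFull returns a vector of per-category probabilities
            Vector scores = learner.classifyFull(new DenseVector(new double[]{1, 0, 2}));
            System.out.println("P(category 1) = " + scores.get(1));
        }
    }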

In stock
Rs.799.00
23x18x3, 2009, Shroff/Packt Publishing, 1, India, Paperback, New, Brett Porter|| Maria Odea Ching

It's easy to get to work with Maven quickly and to get the most out of its related tools when you follow this book's sequential coverage of the sample application. A focus on team environments ensures that you will avoid the pitfalls that are all too common when working in a team. By learning the best practices of working with Maven, you will soon have built an effective, secure Java application.

In stock
Rs.700.00
23x18x2, 2010, Shroff/Packt Publishing, 1, India, Paperback, New, Srirangan

This well-detailed cookbook takes you step by step, one task at a time, through the latest version of Apache Maven 3. With well-explained code and many illustrations to speed up your learning, you will find it an answer to almost all your needs for building high-quality Java applications. If you're a Java developer, it will arm you with all the critical information you need to get to grips with Maven 3, the latest version of Apache's powerful build tool. The book is for Java developers, teams, and managers who want to implement Apache Maven in their development process and leverage the software-engineering best practices and agile team-collaboration techniques it brings along. It is also specifically for developers who wish to get started with Apache Maven and use it with a range of emergent and enterprise technologies, including Enterprise Java, frameworks, Google App Engine, Android, and Scala.

In stock
Rs.799.00
23x18x3, 2010, Shroff/Packt Publishing, 1, India, Paperback, New, Bart Kummel

  • Create appealing and easy-to-use templates with Facelets
  • Ensure reusability of your code by constructing composition components
  • Build consistent looking and usable pages using Trinidad components
  • Extend your JSF standard by using Tomahawk components
  • Enhance your JSF application by enabling AJAX functionality without writing any JavaScript code
  • Create dynamic applications that fit the corporate style and color scheme of your company by using the extensive skinning capabilities of Trinidad
  • Prevent the duplication of validation rules by adding EJB3 annotation-based validation with ExtVal (see the sketch after this list)
  • Optimize your JSF application in terms of performance and page size
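The ExtVal point above deserves a concrete illustration. The idea is that constraints are declared once, on the persistent entity, and ExtVal applies them to the JSF inputs bound to those fields, so the rules are not repeated in every page. A minimal sketch with a hypothetical entity follows; the annotation-to-validation mapping described in the comments is the general ExtVal approach, not code from the book.

    // Hypothetical JPA entity. With ExtVal's annotation-based validation,
    // the JPA constraints below are also enforced on the JSF inputs bound
    // to these fields, so they do not have to be repeated in the page.
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;

    @Entity
    public class Customer {
        @Id
        private Long id;

        // nullable = false acts like "required", and length = 40 acts as a
        // maximum-length check on the bound JSF input component
        @Column(nullable = false, length = 40)
        private String name;

        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }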
In stock
Rs.1,199.00
23x18x2, 2009, Shroff/Packt Publishing, 1, India, Paperback, New, David Thomas

In today's world, JSF is one of the pivotal technologies for implementing middle- to large-scale web applications. With Trinidad, JSF developers have a powerful open source component framework at their fingertips. This book introduces Apache MyFaces Trinidad and combines it with Seam, the next-generation web application framework, to achieve the most comprehensive and effective technology for developing powerful rich-client web applications. You start out by learning where Trinidad comes from and what its aims are. You will learn how Facelets and Seam are used to get the most out of JSF, and you will also learn about the most commonly occurring tag attributes and, in particular, Trinidad's AJAX technology.

In stock
Rs.799.00
23x18x2, 2010, Shroff/Packt Publishing, 1, India, Paperback, New, Ruth Hoffman

Let this book be your guide to enhancing your OFBiz productivity, saving you valuable time. Written specifically to give clear and straightforward answers to the most commonly asked OFBiz questions, this compendium of OFBiz recipes will show you everything you need to know to get things done in OFBiz.

Whether you are new to OFBiz or an old pro, you are sure to find many useful hints and handy tips here. Topics range from getting started, through configuration and system setup, security, and database management, to the final stages of developing and testing new OFBiz applications.

In stock
Rs.999.00
2015, Shroff/O'Reilly, First, New, India, Mohammad Kamrul Islam|| Aravind Srinivasan
Get a solid grounding in Apache Oozie, the workflow scheduler system for managing Hadoop jobs. With this hands-on guide, two experienced Hadoop practitioners walk you through the intricacies of this powerful and flexible platform, with numerous examples and real-world use cases.

Once you set up your Oozie server, you’ll dive into techniques for writing and coordinating workflows, and learn how to write complex data pipelines. Advanced topics show you how to handle shared libraries in Oozie, as well as how to implement and manage Oozie’s security capabilities.
  • Install and configure an Oozie server, and get an overview of basic concepts
  • Journey through the world of writing and configuring workflows (a client-side submission sketch follows this list)
  • Learn how the Oozie coordinator schedules and executes workflows based on triggers
  • Understand how Oozie manages data dependencies
  • Use Oozie bundles to package several coordinator apps into a data pipeline
  • Learn about security features and shared library management
  • Implement custom extensions and write your own EL functions and actions
  • Debug workflows and manage Oozie’s operational details
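As a taste of the client side of this, here is a minimal sketch that submits and monitors a workflow through Oozie's Java client API; the server URL, HDFS paths, and host names are placeholders.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.WorkflowJob;

    public class SubmitWorkflowSketch {
        public static void main(String[] args) throws Exception {
            // Point the client at a (placeholder) Oozie server URL
            OozieClient oozie = new OozieClient("http://localhost:11000/oozie");

            // Job properties; APP_PATH must point at a deployed workflow.xml
            Properties conf = oozie.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/user/me/my-wf");
            conf.setProperty("nameNode", "hdfs://namenode:8020");
            conf.setProperty("jobTracker", "resourcemanager:8032");

            // Submit and start the workflow, then poll until it completes
            String jobId = oozie.run(conf);
            while (oozie.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
                Thread.sleep(10 * 1000);
            }
            System.out.println("Workflow " + jobId + " finished as "
                    + oozie.getJobInfo(jobId).getStatus());
        }
    }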
About the Authors
Mohammad Kamrul Islam is currently a Staff Software Engineer on the data engineering team at Uber. Previously, he worked for more than two years as a Staff Software Engineer on the Hadoop development team at LinkedIn. Before that, he worked at Yahoo for nearly five years as an Oozie architect and technical lead. His fingerprints can be found all over Oozie, and he is a respected voice in the Oozie community. He has been intimately involved with the Apache Hadoop ecosystem since 2009. Mohammad has a Ph.D. in Computer Science, with a specialization in parallel job scheduling, from Ohio State University. He received his MSCS degree from Wright State University, Ohio, and his BSCS from Bangladesh University of Engineering and Technology (BUET). He is a Project Management Committee (PMC) member of both Apache Oozie and Apache Tez and frequently contributes to Apache YARN/MapReduce and Apache Hive. He was elected PMC chair and Vice President of Apache Oozie at the Apache Software Foundation, serving from 2013 through 2015.

Aravind Srinivasan has been involved with Hadoop in general, and Oozie in particular, since 2008. He is currently a Lead Application Architect at Altiscale, a Hadoop-as-a-service company, where he helps customers with Hadoop application design and architecture. His association with Big Data and Hadoop started during his time at Yahoo, where he spent almost six years working on various data pipelines for advertising systems. He has extensive experience building complicated, low-latency data pipelines and in porting legacy pipelines to Oozie. He drove many of Oozie's requirements as a customer in its early days of adoption inside Yahoo, and later spent some time as a Product Manager on Yahoo's Hadoop team, where he contributed further to Oozie's roadmap. After Yahoo, he spent a year at Think Big Analytics, a Hadoop consulting firm, where he consulted on some interesting and challenging Big Data integration projects at Facebook. He has a Master's in Computer Science from Arizona State and lives in Silicon Valley.
In stock
Rs.725.00
23x18x3, 2009, Shroff/Packt Publishing, 1, India, Paperback, New, Alfonso Romero

This hands-on, practical book introduces you to Apache Roller. Starting with the configuration and installation of your own blog, you'll quickly learn how to add interesting content to your blog with the help of plenty of examples. You'll also learn how to change your blog's visual appearance using Roller themes and templates, and how to create a community of blogs for you and your colleagues or friends on your Apache Roller blog server. The book also looks at ways you can manage your community and keep your site safe and secure, ensuring that it is a spam-free, enjoyable place for your users.

In stock
Rs.999.00
23x18x2, 2011, Shroff/Packt Publishing, 1, India, Paperback, New, David Smiley|| Eric Pugh

Using a large set of metadata about artists, releases, and tracks, courtesy of the MusicBrainz.org project, you will have a testing ground for Solr and will learn how to import this data in various ways. You will then learn how to search this data in different ways, including Solr's rich query syntax and "boosting" match scores based on record data.
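To give a feel for what such searches look like from Java, here is a minimal SolrJ sketch against a hypothetical artist index; the core name, field names, and boost value are invented, and the client class name differs in older SolrJ versions.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class ArtistSearchSketch {
        public static void main(String[] args) throws Exception {
            HttpSolrClient solr =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mbartists").build();

            // Rich query syntax: a field-qualified term plus a down-boosted
            // alias clause, so name matches rank above alias matches
            SolrQuery query = new SolrQuery("a_name:Smashing OR a_alias:Smashing^0.5");
            query.setRows(10);

            QueryResponse response = solr.query(query);
            for (SolrDocument doc : response.getResults()) {
                System.out.println(doc.getFieldValue("a_name"));
            }
            solr.close();
        }
    }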

In stock
Rs.1,199.00
24x18x2, 2013, Shroff/Packt Publishing, 1, India, Paperback, New, Rafal Kuc
  • Efficient and configurable Apache Solr 4 setup
  • Index your data in different formats, forms, and sources
  • Implement different kinds of autocomplete functionality
  • Achieve near real-time search with Apache Solr 4 (see the sketch after this list)
  • Improve and benchmark Apache Solr for increased performance
  • Master SolrCloud functionality
  • Diagnose and resolve your problems with Apache Solr 4
  • Improve the relevance of your queries
  • Overcome common problems when analyzing your data
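The near real-time item above can be made concrete with a small sketch: add a document and make it searchable almost immediately with a soft commit rather than a full hard commit. The URL and fields are placeholders, and the client class name differs in older SolrJ versions.

    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class NearRealTimeSketch {
        public static void main(String[] args) throws Exception {
            HttpSolrClient solr =
                new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build();

            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "book-1");
            doc.addField("title", "Apache Solr 4 Cookbook");
            solr.add(doc);

            // commit(waitFlush, waitSearcher, softCommit): a soft commit opens
            // a new searcher without flushing segments to disk, so the document
            // becomes visible to queries almost immediately
            solr.commit(true, true, true);
            solr.close();
        }
    }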
In stock
Rs.999.00
2013, Shroff/O'Reilly, 1, Paperback, New, Kathleen Ting|| Jarek Jarcec Cecho
Integrating data from multiple sources is essential in the age of big data, but it can be a challenging and time-consuming task. This handy cookbook provides dozens of ready-to-use recipes for using Apache Sqoop, the command-line interface application that optimizes data transfers between relational databases and Hadoop.

Sqoop is both powerful and bewildering, but with this cookbook’s problem-solution-discussion format, you’ll quickly learn how to deploy and then apply Sqoop in your environment. The authors provide MySQL, Oracle, and PostgreSQL database examples on GitHub that you can easily adapt for SQL Server, Netezza, Teradata, or other relational systems.
  • Transfer data from a single database table into your Hadoop ecosystem (see the sketch after this list)
  • Keep table data and Hadoop in sync by importing data incrementally
  • Import data from more than one database table
  • Customize transferred data by calling various database functions
  • Export generated, processed, or backed-up data from Hadoop to your database
  • Run Sqoop within Oozie, Hadoop’s specialized workflow scheduler
  • Load data into Hadoop’s data warehouse (Hive) or database (HBase)
  • Handle installation, connection, and syntax issues common to specific database vendors
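For the single-table and incremental-import items above, here is a minimal sketch of driving Sqoop 1 from Java rather than the shell, equivalent to a "sqoop import" command line; the JDBC URL, credentials, table, and directory are placeholders.

    import org.apache.sqoop.Sqoop;

    public class SqoopImportSketch {
        public static void main(String[] args) {
            String[] importArgs = {
                "import",
                "--connect", "jdbc:mysql://dbhost/shop",
                "--username", "sqoop_user",
                "--password", "sqoop_pass",
                "--table", "cities",            // the single table to transfer
                "--target-dir", "/data/cities", // HDFS destination directory
                "--incremental", "append",      // only pull rows added since...
                "--check-column", "id",         // ...the last imported id value
                "--last-value", "0"
            };
            // Sqoop.runTool parses the arguments, runs the import tool, and
            // returns the process exit code
            System.exit(Sqoop.runTool(importArgs));
        }
    }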
About the Authors
Kathleen Ting is currently a Customer Operations Engineering Manager at Cloudera, where she helps customers deploy and use the Hadoop ecosystem in production. She has spoken on Hadoop, ZooKeeper, and Sqoop at many Big Data conferences, including Hadoop World, ApacheCon, and OSCON. She's contributed to several projects in the open source community and is a Committer and PMC Member on Sqoop.

Jarek Jarcec Cecho is currently a Software Engineer at Cloudera, where he develops software to help customers better access and integrate with the Hadoop ecosystem. He has led the Sqoop community in the architecture of the next generation of Sqoop, known as Sqoop 2. He's contributed to several projects in the open source community and is a Committer and PMC Member on Sqoop, Flume, and MRUnit.
In stock
Rs.300.00
18x24x2, 2009, Shroff/Packt Publishing, 1, India, Paperback, New, Dave Newton

Dave Newton, a Struts PMC member, has been a professional developer for over twenty years. He got his start in Lisp and Smalltalk development before moving on to a lengthy stint in embedded system, game, and device driver development.

In stock
Rs.999.00