Ever since the invention of “mail merge” in 1980, businesses have engineered new and better ways to personalize their documents, using multiple data sources and smart technology to produce custom catalogs and other materials. But getting it right is not always easy.

Like many other great ideas, the marriage of data and good design is harder than it sounds. The enormous demand for customization and personalization in a rapidly changing SaaS environment is the ultimate stress test for any system, and it takes planning and hard work to perfect intelligent automation.

To direct where multichannel marketing is headed, we need to remember and understand where we came from and what we learned along the way.

The Evolution of Publishing

In the 1980s, people had this crazy idea that personal computers would make our lives easier. Surely, we thought, a computer could take the labor and cost out of producing large, data-intensive publications. Right? Raise your hand if you remember the cumbersome, costly, manual process of using cameras, scanners, litho film, stripping tables, and rubylith for publishing. Effective, yes, but expensive and tedious.

Shortly after, Scitex became the go-to digital system for higher-end publications that had the budget to cover its $1 million-per-seat price tag. By the end of the decade, page-design software, imaging software, and PostScript running on personal computers had made the old ways of producing a printed page obsolete.

The Data Merge

While publishing technologies improved by leaps and bounds, the information and data side of things proved to be more complex. Early on, we figured out how to merge data files with documents, from physical letters (remember those?) to packaging and shipping labels. More complex documents, like directories, technical manuals, and phone books, were next, allowing large amounts of complex data to be poured into page layout templates.
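At its core, that early data merge was simple: rows from a data file poured into the slots of a document template. Here is a minimal sketch of the idea in Python; the field names and sample records are hypothetical, not drawn from any particular merge product.

```python
# A minimal "data merge" sketch: each row of a data file fills the
# placeholders in a document template, producing one document per row.
import csv
import io
from string import Template

# The template plays the role of the page layout; $name, $product,
# and $city are illustrative merge fields.
letter = Template("Dear $name,\nYour $product order ships to $city.\n")

# Stand-in for a data file of customer records.
data = io.StringIO(
    "name,product,city\n"
    "Ada,Widget,Berlin\n"
    "Grace,Gadget,Boston\n"
)

# One merged document per data row.
letters = [letter.substitute(row) for row in csv.DictReader(data)]
print(letters[0])
```

Directories and phone books worked on the same principle, just with far more rows and far more rigid templates.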

There was still a missing element, however. Consumer catalogs and complex product directories needed both robust data handling AND good design. Inflexible, phone book-style layouts were not sufficient. Users wanted the visual appeal of something designed in QuarkXPress or Adobe InDesign AND the ability to include text, images, and pricing automatically from a database.

The Internet + Catalogs = A Perfect Storm

Consumers’ adoption of online retail platforms and apps created the ultimate test for publishers: build a smart system in which catalogs were as personalized as possible. Initially, digital tools made this process easier by creating a pipeline of sorts. Complex data flowed from large, interconnected databases to pre-designated locations on the page. This, of course, opened the door to even more complex workflows, like customizing catalogs for different regions of the country or even personalizing each copy through the magic of digital printing.


There was a problem with this firehose approach, however: the data flow was usually one-way only. Extreme accuracy could be achieved, but at the cost of flexibility. Last-minute changes to any aspect of a featured product, from special pricing to size or color corrections to simple error fixes, could not be made easily or cheaply at the operator level.

App and digital versions of a catalog needed to be customized even further, featuring not only precise product information but also the products and pricing specific to each shopper’s region, store, and shopping preferences. Versioned, vertically focused catalogs had to become even narrower, ultimately centered on the individual consumer.

The Perfect Storm Meets the Cloud

Retail catalogers and other database publishers were thrown an unexpected curveball by the introduction of the cloud. The rise of Software as a Service (SaaS) meant that all those databases and their related applications moved from on-site servers to the cloud, with IT departments coordinating large uploads to migrate their Product Information Management (PIM) and Digital Asset Management (DAM) systems.

So Who Got it Right?

Let’s rewind a bit. In the 1990s, a startup developer based in Germany began to address this problem by connecting database records to QuarkXPress documents via the latter’s XTension architecture. This soon grew into a broader system approach now known as LAGO.

Over time, it embraced several different types of databases, including Product Information Management (PIM), Digital Asset Management (DAM), marketing campaign management, pricing, and other key systems. Focusing primarily on Adobe InDesign and InDesign Server for actual page composition, LAGO achieved prominence in the data-driven catalog space.

What distinguished LAGO from other database publishing systems was a simple but crucial difference: it was not a one-way “firehose” of data into the digital publishing ecosystem. End users could identify needed changes or adjustments, make them with the required approvals, and still meet their production deadlines. The changes flowed back into the respective databases, with an audit trail to ensure data integrity.
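The core of that two-way idea is that an operator-level correction is written back to the source record and logged at the same time. The sketch below illustrates the pattern in Python; the record fields, actor name, and log format are illustrative assumptions, not LAGO’s actual data model or API.

```python
# Sketch of a two-way update: a change made at the operator level is
# applied to the product record AND recorded in an audit trail, so the
# source database and the publication stay in sync.
from datetime import datetime, timezone

product = {"sku": "1234", "price": 19.99}  # hypothetical product record
audit_log = []                             # hypothetical audit trail


def update_field(record, field, new_value, actor):
    """Apply a change and log who changed what, from what, to what."""
    old_value = record[field]
    record[field] = new_value
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "field": field,
        "old": old_value,
        "new": new_value,
    })


# A last-minute price correction made by a catalog operator.
update_field(product, "price", 17.99, actor="catalog_operator")
```

Because every write carries its old value, actor, and timestamp, the change can be traced, reviewed, or rolled back, which is what makes operator-level edits safe for the databases of record.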

This unique two-way approach has other advantages as well. The current “snapshot” of a particular print catalog or flyer can also be used to populate its online or mobile-app counterpart, and the added flexibility allows each retailer or other business to create multiple regional versions of each publication.

Publishing Power Tools

As an added benefit, LAGO now has a cloud or SaaS implementation of its own, allowing it to function easily with its own or third-party PIM and DAM systems. Instead of being a massive, one-time integration and data migration project, LAGO is more like a suite of “power tools” for the ongoing needs of today’s database publisher.


Interested in how to reduce the effort and time spent on your catalog or flyer production? See how we can help!

Request a Demo