Feed aggregator

Shaun M. Thomas: PG Phriday: The Audacity of NoSQL

Planet PostgreSQL - 22 July, 2016 - 20:55

The pure, unadulterated, presumptuous impudence of NoSQL. Engines like MongoDB recklessly discard concepts like ACID in some futile quest to achieve “web scale”, and end up accomplishing neither. RDBMS systems have literally decades of history to draw upon, and have long since conquered the pitfalls NoSQL platforms are just now encountering. There may be something to a couple of them, but by and large, they’re nothing we really need.

At least, that’s something I might have said a couple of weeks ago.

Now, I’ve never really subscribed to the notion that NoSQL is the strict antithesis of traditional SQL-driven systems. The intro paragraph is something of a hyperbolic exaggeration of the general disdain that seems to permeate the NoSQL vs SQL debate. Most DBAs I’ve met basically say “meh” and move on with their day, optimizing queries, tweaking storage metrics, and whatnot. Usually NoSQL is on our radar, but we have other stuff to worry about.

Last week, I was forced to reconsider due to an existing internal project that needed my input. That analysis really showed me why devs and data guys from all walks of life are seduced by the Dark Side of data. After another week slogging through documentation, experimenting with self-healing replication sets, self-balancing sharding, and taking copious notes on everything, something finally clicked. The Dark Side, it seems, has some merit beyond simplifying data structure.

So what can Postgres potentially learn from its eternal adversaries? Beyond stealing their encoding mechanisms via the JSON and JSONB datatypes, that is. My analysis thus far has focused mostly on MongoDB, and it alone incorporates several fundamental concepts that I’ve always felt Postgres lacked. For now, I’ll focus on the two that have—in my opinion, anyway—really prevented Postgres from reaching its full potential.

Break Me

Postgres has replication, but no meaningful application beyond having a hot standby or a few read slaves. Why no master election consensus system like RAFT? Last year, I came across an excellent article that discussed implementing it with HAProxy, Governor, and etcd instead of the more historic (and infinitely more complicated) Pacemaker stack. Someone even forked Governor into Patroni to incorporate alternative consensus approaches. Still, that’s a lot of external dependencies.

Postgres has all of the parts to handle this itself; it just doesn’t. It has the replication stream. It knows which of all the replica nodes has the highest transaction id. It has a process for cloning an existing instance in pg_basebackup. If an old primary node needs to become a replica, it can either do so directly, or use pg_rewind if necessary. Yet these fragments must be combined manually. Some use scripts, others use Pacemaker or some variant of the Governor approach above, but it’s much easier for a new project to simply choose another platform.
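
To make the gap concrete, here is a rough sketch of what that manual glue typically amounts to today. Host names and data directories are placeholders, and the function name is the 9.x spelling:

# On each replica, ask how far WAL replay has progressed; the most advanced
# node becomes the new primary.
psql -At -c "SELECT pg_last_xlog_replay_location();"

# Promote the chosen replica.
pg_ctl promote -D /var/lib/pgsql/data

# Reattach the old primary as a standby, rewinding it if its timeline diverged.
pg_rewind --target-pgdata=/var/lib/pgsql/data \
          --source-server='host=new-primary user=postgres'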

The specific issue at hand is that Postgres elects to consider itself an autonomous process. In this view, a database is a self-contained data management structure, where everything goes into a single extremely high integrity bucket. As a consequence, Postgres has no concept of a cluster at all, except perhaps a slight awareness of currently streaming replicas.

MongoDB takes itself much less seriously. In the MongoDB world, a mongod instance is a bucket that will accept any and all content, and do a passable job of protecting it. As such, it’s much less concerned with spillage, giving it greater leverage for passing that bucket around. The authors of MongoDB clearly wanted to focus on data availability rather than integrity. To get that, they settled on a consensus system that some might argue is inadequate when combined with their data writing strategies.

Despite that, it works. MongoDB is a concept beyond mere data storage. Where Postgres is becoming more of a storage middleware through additions like extensions and foreign data wrappers, MongoDB is “where your data goes.” That’s a very alluring attitude, and a subtle shift in focus few (if any) mainstream RDBMS have adopted. First and foremost, engines like MongoDB are a cluster, incorporating multiple nodes that interchangeably share roles but present a united face to the world. Postgres has no equivalent.

But it could! Imagine a universe where we have the integrity guarantee of an ACID database, with a Mongo-like front-end that manages the nodes such that our bulletproof data is always available. Speaking of front ends…

Into Pieces

Shards. I have 50TB of data, and I have no Earthly idea what to do with it. I’d love to blow it up into a trillion little bits, but Postgres gives me no easy way to do that. Here’s the second place MongoDB trumps Postgres in its current incarnation. It’s not exactly a cake-walk, considering I drew up this chart while compiling my notes:

But all the tools are there. Anyone with a dozen servers or VMs can build a scalable data store without writing their own distribution implementation. That’s hard to ignore.

For Postgres, it really boils down to the citus extension and Postgres-XL. Unfortunately citus has severe limitations with joins that include non-distributed tables. Likewise, Postgres-XL has been trying to merge the scaling code into the 9.5 branch for a year now. Despite the increased visibility and presumed quality of 2ndQuadrant’s work, there are a couple of orders of magnitude fewer eyes on that code. Even assuming they manage to merge everything before 9.6 drops, will they be able to keep up with core afterwards? Citus has its limitations, but in the end, it is an extension anyone can install. Anything that isn’t an extension risks falling woefully behind or being abandoned.
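
For context, here is roughly what declaring a distributed table looked like; this is a hedged sketch using what I believe were the Citus 5.x-era function names, and the events table and shard counts are hypothetical:

CREATE EXTENSION citus;

SELECT master_create_distributed_table('events', 'tenant_id', 'hash');
SELECT master_create_worker_shards('events', 16, 2);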

Is there another way? If we treated Postgres backends like dumb containers the way MongoDB does, things suddenly change quite a bit. MongoDB has a process called mongos which is basically just a glorified proxy that manages shard metadata and forks out a balancing job that ushers data around to ensure shard content isn’t lopsided. That’s the “Manager / Balancer” in my chart. What if it also included a transaction manager?

Postgres has supported background workers since 9.3. In theory, it could adopt management of shard metadata, chunk migration, or global transaction roles where appropriate. This is essentially what Postgres-XL (and similar) is doing. Where MongoDB wrote a glue process to bring everything together, Postgres-XL (and its ilk) opted to directly patch the Postgres source, and all the pain that entails.

I’m in danger of being too dismissive of implementation details, of course. Yet there are massive potential benefits to taking a cluster approach by default. Such a distribution proxy could be database agnostic, compatible with any engine that speaks SQL. Consider too that Postgres foreign data wrappers imply it could be the communication layer.

Last Thoughts

Postgres is tantalizingly close to being a cluster system simply through the communication protocols it provides. It just needs that final piece. I’m fully convinced that someone with the right skillset (definitely not me) could whip up a proof of concept for this in a few days. This is, after all, how most application-level sharding approaches work anyway. In the end, that’s all MongoDB and several other NoSQL solutions ultimately provide. Take one part integrated RAFT process to always present a writable primary, and mix in a data distribution and balancing proxy. Done.

If they so desired, the MongoDB developers could theoretically even use Postgres as a back-end storage mechanism. In fact, I highly encourage them to do so! Given a natural shard split and extensions like cstore_fdw, suddenly NoSQL becomes a scalable column-store.
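
As a rough sketch of that combination (assuming the usual cstore_fdw setup of loading the library via shared_preload_libraries; the table definition is made up), the column store side looks like any other foreign data wrapper:

CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

CREATE FOREIGN TABLE sensor_archive (
  location      int,
  reading       bigint,
  reading_date  timestamp
) SERVER cstore_server
  OPTIONS (compression 'pglz');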

Barring that, corporate use of Postgres will necessarily be limited. There’s only so much you can do in a single instance, and without integral scaling features, it becomes migration time. I don’t want to tell our devs to convert all of their ORMs and queries to JSON, give up easy joins, or forgo NULL handling, but I will. I have to. This 50TB is only growing every day, and without a straightforward and reliable migration path, we need to retool.

I disagree with too many of the data compromises MongoDB makes to use it for important data, but the availability approach is fundamentally sound. Mongo’s implementation is refreshing and innovative, and one which many NoSQL engines appear to share. Is it any wonder there has been a silent transition to these systems, when everything has several billion rows, and applications fully expect to reach well beyond terabytes of storage into petabyte territory?

No single instance can absorb that kind of volume, and not everyone has time or inclination to work around that limitation. We need a common middle-ground with the NoSQL world. We have the better storage engine, and they have the better distribution and availability mechanisms. Can’t there be a place where both of those things work in tandem?

I, for one, welcome our cluster overlords.

Oskari Saarenmaa: Backing up tablespaces and streaming WAL with PGHoard

Planet PostgreSQL - 22 July, 2016 - 14:32
We've just released a new version of PGHoard, the PostgreSQL cloud backup tool we initially developed for Aiven and later open sourced.

Version 1.4.0 comes with the following new features:
  • Support for PostgreSQL 9.6 beta3
  • Support for backing up multiple tablespaces
  • Support for StatsD and DataDog metrics collection
  • Basebackup restoration now shows download progress
  • Experimental new WAL streaming mode walreceiver, which reads the write-ahead log data directly from the PostgreSQL server using the streaming replication protocol
  • New status API in the internal REST HTTP server
Please see our previous blog post about PGHoard for more information about the tool and a guide for deploying it.

Backing up multiple tablespaces

This is the first version of PGHoard capable of backing up multiple tablespaces. Multiple tablespaces require using the new local-tar backup option, which reads files directly from disk instead of streaming them using pg_basebackup, as pg_basebackup doesn't currently allow streaming multiple tablespaces without writing them to the local filesystem.

The current version of PGHoard can utilize the local-tar backup mode only on a PG master server; PostgreSQL versions prior to 9.6 don't allow users to run the necessary control commands on a standby server without using the pgespresso extension. pgespresso also required fixes, which we contributed, to support multiple tablespaces - once a fixed version has been released we'll add support for it to PGHoard.

The next version of PGHoard, due out by the time of the PostgreSQL 9.6 final release, will support local-tar backups from standby servers: natively when running 9.6, and via the latest version of the pgespresso extension when running older versions.

A future version of PGHoard will support backing up and restoring PostgreSQL basebackups in parallel mode when using the local-tar mode.  This will greatly reduce the time required for setting up a new standby server or restoring a system from backups.

Streaming replication support

This version adds experimental support for reading PostgreSQL's write-ahead log directly from the server using the streaming replication protocol, which is also used by PostgreSQL's native replication and related tools such as pg_basebackup and pg_receivexlog. The functionality currently depends on an unmerged psycopg2 pull request which we hope to see land in a psycopg2 release soon.

While the walreceiver mode is still experimental, it has a number of benefits over other methods of backing up the WAL and allows implementing new features in the future: the temporary, uncompressed files written by pg_receivexlog are no longer needed, saving disk space and I/O, and incomplete WAL segments can be archived at specified intervals or, for example, whenever a new COMMIT appears in the WAL stream.

New contributors

The following people contributed their first patches to PGHoard in this release:
  • Brad Durrow
  • Tarvi Pillessaar

PGHoard in Aiven.io

We're happy to talk more about PGHoard and help you set up your backups with it. You can also sign up for a free trial of our Aiven.io PostgreSQL service where PGHoard will take care of your backups.


Cheers,
Team Aiven

Joshua Drake: The fall of Open Source

Planet PostgreSQL - 21 July, 2016 - 22:15

Once upon a time FOSS was about Freedom. It was about exposing equality within source code. It allowed everyone equal rights and equal access to the technology they were using. An idea that, if you were capable, you could fix the code or pay someone to fix it. An ideology that there was something greater than yourself, and that an inherent right to software freedom was built into what it is to be human.

Leaders to lemmings

I sat in a bar slowly nursing beers with other community members over a period of hours. We spoke of many things. We spoke of the never-done new PostgreSQL website. We spoke of my distaste for Amazon Web Services (since reformed, with the exception of S3). We spoke of life. We argued, we had honest discourse, and that is excellent. There was nobody complaining of political correctness. There was nobody claiming to be “offended”. There was nobody leaving because their feelings were hurt. There was a community member who passed out in his chair and dropped his phone. We walked him to his room to make sure he was safe. All was good.

This retrospective has been digging around in my grey matter since that night six months ago. Originally this was going to just be the stuff of legendary and exaggerated stories among community members that are only getting older and a few who are young but will get there someday. That is, until it began to itch, and as with any good community member, I am scratching that itch.

“My time is precious to me”

It seems like a harmless thing to say. Of course your time is precious to you. I would say that is probably true of most people. I know that my time is precious to me. I make it a point of working part time from May to September so I can take time for my family. (Don’t worry, I more than make up for it the rest of the year.)

The problem with the statement is the context. The statement came from a well known contributor and a very smart guy. The reference was in relation to why someone would use software as a service and the general idea was: Software as a Service is awesome because it allows me to have more time for me.

The great compromise

A lot of companies have come up through the ranks to become dominant players in the Open Source industry: Meetup.com for user groups, GitHub for development, Heroku for software as a service and Slack for communications. When considered independently there is nothing wrong with these services. They offer great value, they increase productivity, more code gets developed, more software gets released and communities grow.

The problem is that not a single one of these services is open source. The use of these services creates an intrinsic advocate position for closed source software. In turn you will see the use of these services increase whilst the use of open source alternatives decreases.

Consider Slack, which is widely considered the hot new collaboration tool. Yet it does not adhere to open standards; its network is closed, as is its software. Yes, you can interoperate with it using open source tools as well as standard protocols, but in no way is it actually open in the sense of our community or our licensing. The argument is, “I use Slack because there are no other tools like it”.

XMPP (Jabber), which is Open Source and a standard IETF protocol (RFC 3920), can provide a similar environment to Slack. It supports video, voice, plugins, external protocols and bridges, image embedding, video sharing, file sharing and, yes, chat. It also supports federation, which allows any community to communicate with any other community using XMPP.

I appreciate the PostgreSQL community. The PostgreSQL community hosts its own code repositories, website, and mailing lists. We collaborate in the true vision of Open Source and actively reject moving our project to externally hosted facilities controlled by services which are not Open Source. The community does it even though it may be quicker or more convenient to use a service. The community puts forth the effort for the community. There is an ideology that is about fairness, freedom, equality, rights, and the greater good.  

And that, was the fall of Open Source

The moment that Open Source becomes primarily about “my time” is the moment that Open Source is no longer a movement. It is no longer an ideology. It is no longer about fairness, freedom, equality, rights, or the greater good.  

gabrielle roth: PDXPUG: 10th anniversary meeting tonight!

Planet PostgreSQL - 21 July, 2016 - 17:35

We’re having our 10th Anniversary meeting tonight!

As in previous years, I generated speaker & topic tag clouds:

When: 6-8:30pm Thursday July 21, 2016
Where: iovation
What: 10th Anniversary Social

This meeting is 100% social and will take the place of our regular July meeting. iovation will provide a light dinner and beverages. We will provide the conversation and reminiscing. There will not be any long speeches, but there will be cupcakes.

iovation is on the 32nd floor of the US Bancorp Tower at 111 SW 5th (5th & Oak). It’s right on the Green & Yellow Max lines. Underground bike parking is available in the parking garage; outdoors all around the block in the usual spots. No bikes in the office, sorry! Elevators open at 5:45 and building security closes access to the floor at 6:30.

See you there!

Gulcin Yildirim: Evolution of Fault Tolerance in PostgreSQL: Time Travel

Planet PostgreSQL - 21 July, 2016 - 11:00

PostgreSQL is an awesome project and it evolves at an amazing rate. We’ll focus on evolution of fault tolerance capabilities in PostgreSQL throughout its versions with a series of blog posts. This is the third post of the series and we’ll talk about timeline issues and their effects on fault tolerance and dependability of PostgreSQL.

If you would like to witness the evolution progress from the beginning, please check the first two blog posts of the series:

  1. Evolution of Fault Tolerance in PostgreSQL 
  2. Evolution of Fault Tolerance in PostgreSQL: Replication Phase 

Timelines

The ability to restore the database to a previous point in time creates some complexities; we’ll cover some of these cases by explaining failover (Fig. 1), switchover (Fig. 2) and pg_rewind (Fig. 3) later in this post.

For example, in the original history of the database, suppose you dropped a critical table at 5:15PM on Tuesday evening, but didn’t realise your mistake until Wednesday noon. Unfazed, you get out your backup, restore to the point-in-time 5:14PM Tuesday evening, and are up and running. In this history of the database universe, you never dropped the table. But suppose you later realize this wasn’t such a great idea, and would like to return to sometime Wednesday morning in the original history. You won’t be able to if, while your database was up-and-running, it overwrote some of the WAL segment files that led up to the time you now wish you could get back to.

Thus, to avoid this, you need to distinguish the series of WAL records generated after you’ve done a point-in-time recovery from those that were generated in the original database history.

To deal with this problem, PostgreSQL has a notion of timelines. Whenever an archive recovery completes, a new timeline is created to identify the series of WAL records generated after that recovery. The timeline ID number is part of WAL segment file names so a new timeline does not overwrite the WAL data generated by previous timelines. It is in fact possible to archive many different timelines.
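
The timeline ID is the first eight hex digits of each WAL segment file name, so segments from different timelines never collide. For example:

000000010000000000000012    WAL segment written on timeline 1
000000020000000000000012    the same segment range, written on timeline 2
00000002.history            history file recording where timeline 2 branched off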

Consider the situation where you aren’t quite sure what point-in-time to recover to, and so have to do several point-in-time recoveries by trial and error until you find the best place to branch off from the old history. Without timelines this process would soon generate an unmanageable mess. With timelines, you can recover to any prior state, including states in timeline branches that you abandoned earlier.

Every time a new timeline is created, PostgreSQL creates a ”timeline history” file that shows which timeline it branched off from and when. These history files are necessary to allow the system to pick the right WAL segment files when recovering from an archive that contains multiple timelines. Therefore, they are archived into the WAL archive area just like WAL segment files. The history files are just small text files, so it’s cheap and appropriate to keep them around indefinitely (unlike the segment files which are large). You can, if you like, add comments to a history file to record your own notes about how and why this particular timeline was created. Such comments will be especially valuable when you have a thicket of different timelines as a result of experimentation.

The default behaviour of recovery is to recover along the same timeline that was current when the base backup was taken. If you wish to recover into some child timeline (that is, you want to return to some state that was itself generated after a recovery attempt), you need to specify the target timeline ID in recovery.conf. You cannot recover into timelines that branched off earlier than the base backup.
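
A minimal recovery.conf sketch for that case might look like the following; the archive path, timestamp and timeline ID are placeholders:

restore_command = 'cp /mnt/server/archive/%f %p'
recovery_target_time = '2016-07-19 05:14:00'
recovery_target_timeline = '3'    # or 'latest' to follow the newest branch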

To simplify the timeline concept in PostgreSQL, the timeline-related issues in the failover, switchover and pg_rewind cases are summarised and explained with Fig. 1, Fig. 2 and Fig. 3.

Failover scenario:

 Fig.1 Failover 

  • There are outstanding changes in the old master (TL1)
  • Timeline increase represents new history of changes (TL2)
  • Changes from the old timeline can’t be replayed on the servers that switched to the new timeline
  • The old master can’t follow the new master
Switchover scenario:

 Fig.2 Switchover

  • There are no outstanding changes in the old master (TL1)
  • Timeline increase represents new history of changes (TL2)
  • The old master can become standby for the new master
pg_rewind scenario:

Fig.3 pg_rewind

  • Outstanding changes are removed using data from the new master (TL1)
  • The old master can follow the new master (TL2)
pg_rewind

pg_rewind is a tool for synchronising a PostgreSQL cluster with another copy of the same cluster, after the clusters’ timelines have diverged. A typical scenario is to bring an old master server back online after failover, as a standby that follows the new master.

The result is equivalent to replacing the target data directory with the source one. All files are copied, including configuration files. The advantage of pg_rewind over taking a new base backup, or tools like rsync, is that pg_rewind does not require reading through all unchanged files in the cluster. That makes it a lot faster when the database is large and only a small portion of it differs between the clusters.

How does it work?

The basic idea is to copy everything from the new cluster to the old cluster, except for the blocks that we know to be the same.

  1. Scan the WAL log of the old cluster, starting from the last checkpoint before the point where the new cluster’s timeline history forked off from the old cluster. For each WAL record, make a note of the data blocks that were touched. This yields a list of all the data blocks that were changed in the old cluster, after the new cluster forked off.
  2. Copy all those changed blocks from the new cluster to the old cluster.
  3. Copy all other files such as clog and configuration files from the new cluster to the old cluster, everything except the relation files.
  4. Apply the WAL from the new cluster, starting from the checkpoint created at failover. (Strictly speaking, pg_rewind doesn’t apply the WAL, it just creates a backup label file indicating that when PostgreSQL is started, it will start replay from that checkpoint and apply all the required WAL.)

Note: wal_log_hints must be set in postgresql.conf for pg_rewind to be able to work. This parameter can only be set at server start. The default value is off.
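
As a minimal sketch, the prerequisite and a typical invocation look roughly like this; the data directory and connection string are placeholders, and clusters initialised with data checksums satisfy the same requirement without wal_log_hints:

# postgresql.conf on the cluster that may later become a pg_rewind target
wal_log_hints = on

# after failover, resynchronise the old master against the new one
pg_rewind --target-pgdata=/var/lib/pgsql/data \
          --source-server='host=new-master port=5432 user=postgres'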

Conclusion

In this blog post, we discussed timelines in Postgres and how we handle failover and switchover cases. We also talked about how pg_rewind works and its benefits to Postgres fault tolerance and dependability. We’ll continue with synchronous commit in the next blog post.

References

PostgreSQL Documentation
PostgreSQL 9 Administration Cookbook – Second Edition
pg_rewind Nordic PGDay presentation by Heikki Linnakangas

PostgreSQL 9.6 Beta 3 Released

PostgreSQL News - 21 July, 2016 - 03:00

The PostgreSQL Global Development Group announces today that the third beta release of PostgreSQL 9.6 is available for download. This release contains previews of all of the features which will be available in the final release of version 9.6, including fixes to many of the issues found in the first and second betas. Users are encouraged to continue testing their applications against 9.6 beta 3.

Changes Since Beta 2

Our users and contributors tested the first two betas, and found and reported many bugs and usability issues. This beta release fixes many of those issues. If you reported an issue, please test this beta to ensure that it's completely fixed.

This included many more fixes to parallel query:

  • adding support for TABLESAMPLE
  • correcting cost data in Gather nodes
  • check parallel-safety of all appended relations
  • fix planner issues with initPlans
  • fix several issues with client_encoding
  • clean up EXPLAIN output for parallel queries
  • multiple changes to parallel aggregation internals

Among the other issues addressed were:

  • add conninfo field to pg_stat_wal_receiver
  • clear all-frozen visibilitymap status when locking tuples
  • correctly dump database and tablespace ACLs
  • avoid invalidating all foreign-join cached plans when user mappings change
  • fix nested NOT operation cleanup in tsquery
  • allow IMPORT FOREIGN SCHEMA within pl/pgsql
  • print a given subplan only once in EXPLAIN
  • change the GetForeignUpperPaths API
  • fix several issues in postgres_fdw
  • added many additional regression tests
  • many documentation updates and clarifications
  • multiple translation updates for the docs

Due to changes in system catalogs, a pg_upgrade or dump and restore will be required for users migrating databases from earlier betas. This includes a version update to the pg_control tool, making it incompatible with PostgreSQL 9.4 and earlier. This beta also includes patches to many issues shared with earlier versions, which will be updated in an upcoming minor release.

Note that some known issues remain unfixed. Before reporting a bug in the beta, please check the Open Items page.

Beta Schedule

This is the third beta release of version 9.6. The PostgreSQL Project will release additional betas as required for testing, followed by one or more release candidates, until the final release in late 2016. For further information please see the Beta Testing page.


Peter Eisentraut: Using GNU GLOBAL with PostgreSQL

Planet PostgreSQL - 20 July, 2016 - 15:00

When you are coding in a source tree as big as PostgreSQL’s, you will at some point want to look into some kind of source code indexing. It’s often convenient not to bother, since git grep is actually superfast. But when you want to find where a function is defined among all the call sites, some more intelligence is useful.

The traditional tools for this are ctags and etags, which create index files intended for use by vi and Emacs, respectively. The PostgreSQL source tree has some customized support for these in the tools src/tools/make_ctags and src/tools/make_etags. Because these tools operate on a directory level, those wrapper scripts create a single tag file (named tags or TAGS respectively) in the top-level directory and symlink it to all the other directories. This allows you to easily look for entries across the entire source tree. But it’s clearly a hack, and at least Emacs is often somewhat confused by this setup.

But there is something much better that works very similarly: GNU GLOBAL. A main difference is that GNU GLOBAL works on a project basis not on a directory basis, so you don’t need to do contortions to create and manage tags files all over your source tree. Also, GLOBAL can be used from the command line, so you don’t need to be an editor wizard to get started with it. Plus, it appears to be much faster.

The whole thing is very simple. Install the package, which is usually called global and available in most operating system distributions. To start, run

$ gtags

in the top-level directory. This creates the files GPATH, GRTAGS, and GTAGS.

Then you can use global to search for stuff, like

$ global elog
src/include/utils/elog.h

Or you can look for places a function is called:

$ global -r write_stderr

You can run global in any directory.

Or how about you want to look at the code where something is defined:

$ less -t elog

Note no file name is required. (See the manual for the required setup to make this work with less.)

Or of course use editor integration. For Emacs, there is ggtags-mode.

Here is some fine-tuning for use with the PostgreSQL source tree. Generally, I don’t want to index generated files. For example, I don’t want to see hits in gram.c, only in gram.y. Plus, you don’t want to index header files under tmp_install. (Super annoying when you use this to jump to a file to edit and later find that your edits have been blown away by make check.) But when you run gtags in a partially built tree, it will index everything it finds. To fix that, I have restricted gtags to only index files that are registered in Git, by first running

git ls-files >gtags.files

in the top-level directory. Then gtags will only consider the listed files.

This will also improve the workings of the Emacs mode, which will at random times call global -u to update the tags. If it finds a gtags.files file, it will observe that and not index random files lying around.

I have a shell alias pgconfigure which calls configure with a bunch of options for easier typing. It’s basically something like

pgconfigure() {
    ./configure --prefix=$(cd .. && pwd)/pg-install --enable-debug --enable-cassert ...
}

At the end I call

git ls-files >gtags.files
gtags -i &

to initialize the source tree for GNU GLOBAL, so it’s always there.

Crunchy Delivers First Commercially Available 100% Open Source RDBMS To Receive Common Criteria Certification

PostgreSQL News - 20 July, 2016 - 03:00
Crunchy Certified PostgreSQL Receives Level 2 + Certification, Meeting Common Criteria’s Rigid Standards and Organizations’ Security Needs

Charleston, SC — July 19, 2016 — Crunchy Data, a leading provider of trusted open source PostgreSQL and PostgreSQL technology, support and training, today announced that Crunchy Certified PostgreSQL, the company’s commercially available distribution of PostgreSQL 9.5, has been awarded Common Criteria Evaluation Assurance Level (EAL) 2+ certification, an international standard for computer security certification. It is the first commercially available 100% open source relational database management system to receive Common Criteria certification.

For more information please see the full press release here.

Gabriele Bartolini: Speed up getting WAL files from Barman

Planet PostgreSQL - 19 July, 2016 - 17:00

Starting from Barman 1.6.1, PostgreSQL standby servers can rely on an “infinite” basin of WAL files and finally pre-fetch batches of WAL files in parallel from Barman, speeding up the restoration process as well as making the disaster recovery solution more resilient as a whole.

The master, the backup and the standby

Before we start, let’s define our playground. We have our PostgreSQL primary server, called angus. A server with Barman, called barman, and a third server with a reliable PostgreSQL standby, called chris – for various reasons, I had to rule out the following names: bon, brian, malcolm, phil, cliff and obviously axl. ;)

angus is a high workload server and is continuously backed up on barman, while chris is a hot standby server with streaming replication from angus enabled. This is a very simple, robust and cheap business continuity cluster that you can easily create with pure open source PostgreSQL, yet capable of reaching over 99.99% uptime in a year (according to our experience with several customers at 2ndQuadrant).

What we are going to do is to instruct chris (the standby) to fetch WAL files from barman whenever streaming replication with angus is not working, as a fallback method, making the entire system more resilient and robust. Most typical examples of these problems are:

  1. temporary network failure between chris and angus;
  2. prolonged downtime for chris which causes the standby to go out of sync with angus.

For further information, please refer to the Getting WAL files from Barman with ‘get-wal’ blog article that I wrote some time ago.

Technically, we will be configuring the standby server chris to remotely fetch WAL files from barman as part of the restore_command option in the recovery.conf file. Since the release of Barman 1.6.1 we can take advantage of parallel pre-fetching of WAL files, which exploits network bandwidth and reduces recovery time of the standby.

Requirements

This scenario has been tested on Linux systems only, and requires:

  • Barman >= 1.6.1 on the barman server
  • Python with argparse module installed (available as a package for most Linux distributions) on chris
  • Public SSH key of the postgres@chris user in the ~/.ssh/authorized_keys file of the barman@barman user (a procedure known as exchange of SSH public keys)
Installation

As the postgres user on chris, download the script from our GitHub repository into your favourite directory (e.g. ~postgres/bin, or /var/lib/pgsql/bin directly) with:

cd ~postgres/bin
wget https://raw.githubusercontent.com/2ndquadrant-it/barman/master/scripts/barman-wal-restore.py
chmod +700 barman-wal-restore.py

Then verify it is working:

./barman-wal-restore.py -h

You will get this output message:

usage: barman-wal-restore.py [-h] [-V] [-U USER] [-s SECONDS] [-p JOBS] [-z] [-j]
                             BARMAN_HOST SERVER_NAME WAL_NAME WAL_DEST

This script will be used as a 'restore_command' based on the get-wal feature of
Barman. A ssh connection will be opened to the Barman host.

positional arguments:
  BARMAN_HOST           The host of the Barman server.
  SERVER_NAME           The server name configured in Barman from which WALs are taken.
  WAL_NAME              this parameter has to be the value of the '%f' keyword
                        (according to 'restore_command').
  WAL_DEST              this parameter has to be the value of the '%p' keyword
                        (according to 'restore_command').

optional arguments:
  -h, --help            show this help message and exit
  -V, --version         show program's version number and exit
  -U USER, --user USER  The user used for the ssh connection to the Barman server.
                        Defaults to 'barman'.
  -s SECONDS, --sleep SECONDS
                        sleep for SECONDS after a failure of get-wal request.
                        Defaults to 0 (nowait).
  -p JOBS, --parallel JOBS
                        Specifies the number of files to peek and transfer in parallel.
                        Defaults to 0 (disabled).
  -z, --gzip            Transfer the WAL files compressed with gzip
  -j, --bzip2           Transfer the WAL files compressed with bzip2

If you get this output, the script has been installed correctly. Otherwise, you are most likely missing the argparse module in your system.

Configuration and setup

Locate the recovery.conf file on chris and set the restore_command option accordingly:

restore_command = "/var/lib/pgsql/bin/barman-wal-restore.py -p 8 -s 10 barman angus %f %p"

The above example will connect to barman as the barman user via SSH and execute the get-wal command on the angus PostgreSQL server backed up in Barman. The script will pre-fetch up to 8 WAL files at a time and, by default, store them in a temporary folder (currently fixed: /var/tmp/barman-wal-restore).

In case of error, it will sleep for 10 seconds. Using the help page you can learn more about the available options and tune them in order to best fit in your environment.

Verification

All you have to do now is restart the standby server on chris and check from the PostgreSQL log that WALs are being fetched from Barman and restored:

Jul 15 15:57:21 chris postgres[30058]: [23-1] LOG: restored log file "00000001000019EA0000008A" from archive

You can also peek in the /var/tmp/barman-wal-restore directory and verify that the script has been executed.

Even Barman logs contain traces of this activity.

Conclusions

This very simple Python script, which we have written and made available under GNU GPL 3, makes the PostgreSQL cluster more resilient thanks to its tight cooperation with Barman.

It not only provides a stable fallback method for WAL fetching, but it also protects PostgreSQL standby servers from the infamous 255 error returned by SSH in the case of network problems – an error which is different from SIGTERM and is therefore treated as an exception by PostgreSQL, causing the recovery process to abort (see the “Archive Recovery Settings” section in the PostgreSQL documentation).

Stay tuned with us and with Barman’s development as we continue to improve disaster recovery solutions for PostgreSQL. We would like to thank our friends at Subito.it, Navionics and Jobrapido for helping us with the development of this important feature, as well as many other 2ndQuadrant customers whom we cannot mention due to non-disclosure agreements but who continue to support our work.

Side note: hopefully I won’t have to change the way I name servers due to AC/DC continuously changing their formation. ;)

Simon Riggs: Report from DatabaseCamp, NYC

Planet PostgreSQL - 18 July, 2016 - 19:45

Just got back from an extended trip to Database.Camp – Sunday, July 10, 2016 – at the UN in NYC – Totally Free!

Wide range of presentations from CTOs/CEOs from across the database software world, with about 200 attendees, slightly more people earlier in the day.

Very happy to report we had 3 presentations covering different aspects of PostgreSQL
* EnterpriseDB – Ed Boyajian – Building Postgres Ecosystems
* 2ndQuadrant – Simon Riggs – PostgreSQL: The Next 10 Years
* CartoDB – Javier de la Torre – Celebrating PostgreSQL and demonstrating excellent data visualizations

I don’t think any attendee would have missed the fact that 3 companies spoke about Postgres, whereas all other technologies were discussed only once. I noted that when asked “Do you use database X?”, more attendees’ hands went up for Postgres than for any other. Also very happy to report that our talks interlocked nicely with one another, showing attendees that we can work together as a community.

The best talk of the day was from Minerva Tantoco, CTO for the City of New York. Engaging, entertaining and greatly encouraging to see technology being adopted for the public good. Free Wifi in JFK would be nice… but it looks like it’s coming across the whole of NYC, which will really change what is possible.

Well worth attending if they hold it again next year. Congrats to Eric for arranging, and making it for a good cause: Techie Youth.

damien clochard: Restarting PostgreSQL Magazine in a new format

Planet PostgreSQL - 18 July, 2016 - 18:17

TL;DR : With the kind support of PostgreSQL Europe, I am launching a new edition of the PostgreSQL Magazine in an innovative paper format. The primary goal is to publish it and send copies to PostgreSQL Conference Europe in Tallinn in November. If we’re successful, additional copies will be sent to other events.

I’m here to talk to you about the new direction for PG Magazine. So far, we’ve produced 2 issues of the magazine and we’ve received enthusiastic feedback from long-time community members as well as complete newcomers. I still believe that paper is a great medium to promote PostgreSQL and that there’s a need for a regular print publication in the community.

However we’ve been struggling to find a stable economic model to ensure this media remains a community-driven effort. So far the balance we’ve found is that the content is produced, edited and controlled by community members, while the graphic editing, printing and shipping is done by professionals and paid for by non-profit associations (PG Europe, SPI, and others) or commercial sponsors (EDB for instance).

This model works with the current format of the magazine (36 pages) but it’s very slow. Producing a new issue requires a lot of time to gather the articles, edit the content and proof-read everything. It’s also very hard to translate. So far only the Chinese community has been able to fully translate the magazine.

Based on these considerations, here’s an attempt to make the whole process more dynamic while switching to a more appealing format.

Over the last month, I’ve worked on what I called a “folded magazine”. The basic idea is that the magazine is presented as an A5 newspaper. Then you unfold it and you get an A4 newspaper. Then you unfold it and you get an A3 newspaper. Then you unfold it and you get an A2 poster that you can take back to the office…

I used this concept for the 2016 PG Day France booklet. Here’s a video:

https://www.youtube.com/watch?v=X9tbCGsb2Ak

This format has some drawbacks:

a- The graphical editing needs to be done by professionals. A PostgreSQL community member with basic graphic skills can no longer participate in this effort. Previously some Chinese community members were able to produce a translation of the magazine; this format will be harder for them.

b- The printing needs to be done by professionals. You can no longer print it at home unless you’re the happy owner of an A2 printer.

c- The PDF version will be weird

But it also has many advantages:

1- It requires less content and we’ll be able to produce the magazine on a regular basis.

2- It’s easier to give away: giving away a 34-page magazine to a random stranger at a booth was sometimes a waste of paper

3- The A2 poster is a great way to provide more visibility for PostgreSQL inside our users’ workplaces.

4- It’s easier to translate

If you want to join the effort, here’s an optimistic roadmap:

  • Step 1 : Gathering content (deadline 31/07/2016)
  • Step 2 : Editing Content (deadline 15/08/2016)
  • Step 3 : Reviewing (deadline 31/08/2016)
  • Step 4 : Layout (deadline 24/09/2016)
  • Step 5 : Last Check (01/10/2016)
  • Step 6 : Printing (15/10/2016)

The 3rd step (reviewing) and 5th step (last check) are the ones that require the most manpower and time. So prepare yourself for some proof-reading in August and September :)

JDBC 1209 released

PostgreSQL News - 18 July, 2016 - 03:00

Notable changes include:

  • Many improvements to the insert into .. values(?,?) -> insert .. values(?,?), (?,?)... rewriter. Give it a try by using the reWriteBatchedInserts=true connection property. 2-3x improvements for insert batches can be expected
  • Full test suite passes against PostgreSQL 9.6, and OpenJDK 9
  • Performance optimization for timestamps (~TimeZone.getDefault optimization)
  • Allow build-from-source on GNU/Linux without maven repositories, and add Fedora Copr test to the regression suite

The full changelog can be found here

PostgreSQL Conference Europe Registration Open - Call for Papers is still open!

PostgreSQL News - 18 July, 2016 - 03:00

PostgreSQL Conference Europe 2016 in Tallinn, Estonia, on November 1-4 is now accepting registrations for conference attendance at http://2016.pgconf.eu/registration/.

The Early Bird special price will be available until September 14th, and is limited to 75 tickets, which is a great reason to get your registration in early!

There are also six training sessions available for registration - two full day and four half day sessions - for the Tuesday before the main event. These are all limited space sessions, so make sure you register early if you want to attend!

And don't forget - the call for papers is still open! If you have already submitted a talk, or are planning to submit one, we suggest you wait to register until you have received confirmation of whether the talk was accepted or not. The early bird rate will be available long enough for you to register after you have received this notification - and if your talk is accepted, attendance is of course free!

We are accepting proposals for talks in English. Each session will last 45 minutes, and may be on any topic related to PostgreSQL.

The submission deadline is August 7th. Selected speakers will be notified before August 24th, 2016.

Please submit your proposals by going to http://2016.pgconf.eu/callforpapers/ and following the instructions.

The proposals will be considered by a committee, which will produce a schedule to be published nearer the conference date.

All selected speakers will get free entry to the conference (excluding training sessions). We do not in general cover travel and accommodations for speakers, but may be able to do that in limited cases. If you require assistance with funding to be able to attend, please make a note of this in the submission notes field.

The full call for papers is available on the website

As usual, if you have any questions, don't hesitate to contact us at contact@pgconf.eu.

Jan Wieck: My PL/pgSQL code is stuck, but where?

Planet PostgreSQL - 16 July, 2016 - 18:16
In the short time that I have been enhancing the PL profiler (https://bitbucket.org/openscg/plprofiler) I have been asked multiple times if it can be abused as a debugging aid. Not directly. The conversation goes something like this:

Q: When my PL code is stuck somewhere, can I turn on profiling and see where?
A: No.
Q: Will this feature be added?
A: No.

Of course that would be a useful feature. I don't argue that. And it seems that this is precisely how Oracle users find out where their PL/SQL code gets stuck or moves like molasses. However, the reason why I am against adding this is because a profiler, or even parts of it, should not be enabled 24x7 on a production database server for the purpose of eventually using it to extract some debugging information some day. There are other ways to get that information and I will show you one of them.

If a program is stuck somewhere, one uses a debugger to get a backtrace. This works with C programs, like the database server, provided that symbol information is available. In that backtrace (and some variables) we also find the PL/pgSQL backtrace. Having symbol information available is also useful in case of a program crash, to find out why it crashed by loading the core dump into the debugger. 

Every line of PL code that is executed goes through the PL/pgSQL executor function exec_stmt(). At that point we find the current PL code's line number in a variable. Likewise, every function execution goes through either plpgsql_exec_function() or plpgsql_exec_trigger(). In those stack frames we find the OID of the function as well as its signature (name and call argument types).
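
As a rough manual sketch of that procedure with gdb (assuming debug symbols are installed; the frame numbers here are made up, and field names can vary between versions):

$ gdb -p 13239                   # attach to the stuck backend
(gdb) bt                         # C-level backtrace
(gdb) frame 4                    # pick a frame inside exec_stmt()
(gdb) print stmt->lineno         # current PL/pgSQL line number
(gdb) frame 7                    # pick a frame inside plpgsql_exec_function()
(gdb) print func->fn_oid         # OID of the PL/pgSQL function
(gdb) print func->fn_signature   # its name and call argument types
(gdb) detach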

Doing this by hand several times for a deeply nested PL/pgSQL program is tedious and no fun. So here is a little script called plbacktrace (https://github.com/wieck/plbacktrace.git) that does all of that. It is invoked with the PID of the PostgreSQL backend and will output information like



[postgres@db1 tmp]$ ./plbacktrace.py 13239
fn_oid=105672 lineno=5 func="life_get_cell_pl(life_board,integer,integer)"
fn_oid=105673 lineno=12 func="life_num_neighbors_pl(life_board,integer,integer)"
fn_oid=105675 lineno=11 func="life_tick_pl(life_board)"
fn_oid=105674 lineno=22 func="life_pl(integer,integer,boolean)"

I will explain another time why I have a database server playing Conway's Game of Life written in PL/pgSQL.


Regards, Jan

Shaun M. Thomas: PG Phriday: A Postgres Perspective on MongoDB

Planet PostgreSQL - 15 July, 2016 - 23:06

I’ve been almost exclusively a Postgres DBA for a seemingly interminable length of time. While this is good for specializing, nobody wants to be a One-Trick Pony. And aside from the occasional bit of Python to write more advanced tools when Bash isn’t up to the job, it’s All Postgres All The Time. While few things would make me happier, it pays to branch out occasionally.

When NoSQL databases hit the scene a few years ago, I pretty much ignored them wholesale. I read about what they did and how they did it, and while intriguing, none of that seemed particularly better than what Postgres delivered. Cassandra could scale outwards, but has no real concept of NULL, and limits searches to “indexed” columns. Yet sites like Reddit prove how well it can scale when properly leveraged. MongoDB is basically a native JavaScript filesystem, which I ignored because Postgres support of JSON and JSONB effectively deprecated it before it was even on my radar. There are others of course: CouchDB, Redis, HBase, and a plethora of alternatives.

There’s only so much time in a day though, and being as we use MongoDB for a couple of projects, it only makes sense to investigate how it really works. What can a Postgres user, who eats, sleeps, and breathes Postgres, hope to learn from a NoSQL database? I guess we can start with some simple and basic operations. Can I make a table, fill it with 1-million rows, update a few, delete a few, add an extra index for future searches, and perform a query or two?

Let’s start with the Postgres schema I use for practically everything:

CREATE TABLE sensor_log (
  id            SERIAL NOT NULL PRIMARY KEY,
  location      VARCHAR NOT NULL,
  reading       BIGINT NOT NULL,
  reading_date  TIMESTAMP NOT NULL
);

\timing

INSERT INTO sensor_log (id, location, reading, reading_date)
SELECT s.id, s.id % 1000, s.id % 100,
       CURRENT_DATE - ((s.id * 10) || 's')::INTERVAL
  FROM generate_series(1, 1000000) s(id);

Time: 7236.906 ms

CREATE INDEX idx_sensor_log_date ON sensor_log (reading_date);

Time: 804.044 ms

Perhaps the most appropriate question to ask at this junction would be: what is the MongoDB equivalent? Well, as strange as this might sound, there isn’t one. When I described MongoDB as a native JavaScript filesystem, I was being literal. There really are no tables, schemas, or databases as we might think of them. A MongoDB “collection” can be loosely interpreted as a table, in that it stores documents that can be thought of as rows, but that’s where the similarity ends.

A MongoDB collection has no structure at all. Let that sink in a bit. MongoDB collections can not be declared, and as such, literally anything can be stored in one. Check this out:

use mydb

db.myCol.insert( {animal: "cow", says: "moo"} )
db.myCol.insert( {length: 12, seven: [ 8, 9 ]} )

db.myCol.find()

{ "_id" : ObjectId("5788fc361b81473a2b3d1998"), "animal" : "cow", "says" : "moo" }
{ "_id" : ObjectId("5788fc3a1b81473a2b3d1999"), "length" : 12, "seven" : [ 8, 9 ] }

While it’s in our best interests to keep similar documents restricted to certain collections, there’s no kind of constraint that enforces this. This makes it easy to keep tangentially related objects together, with dozens or hundreds of non-contiguous attributes, so long as there is some common expected core. Yet, we don’t have to. As a consequence, we can’t actually declare a predefined structure, required keys, or any other kind of constraint.

In fact, we can learn quite a lot from that previous code block. First of all, the “mydb” database was never defined. Neither was “myCol”, the collection used to store our bogus documents. Simply storing a document made them exist. It might be better to think of MongoDB databases and collections as root folders and subdirectories in a filesystem. When a document is stored, it triggers the creation of any necessary parent elements. Once instantiated, the database and collection will persist after being emptied unless explicitly dropped.

Another thing that’s hard to miss, is the presence of the _id field in our output. All MongoDB documents require one of these, and if we don’t specify one, MongoDB gracefully provides it. Of course, the default MongoDB injects is of limited utility due to its arbitrary nature, but that means we can override that behavior.

With all of that in mind, let’s create the same table and fill it with sample data:

use mydb

var currentDate = new Date();
currentDate.setHours(0,0,0,0)
var batch = new Array()

var start = new Date()
for (var i = 1; i <= 1000000; i++) {
  batch[i] = {
    _id: i,
    location: i % 1000,
    reading: i % 100,
    readingDate: new Date(currentDate - i * 10000)
  }
}

db.sensorLog.insert(batch)

(new Date() - start) / 1000

31.791

start = new Date()
db.sensorLog.ensureIndex( { readingDate: 1 } )
(new Date() - start) / 1000

3.2

We can see that for small data like this, Postgres is a bit more time-efficient for allocating new data. If we examine the filesystem, it’s also apparent this fake sensor data requires 448MB on MongoDB, while Postgres stored everything in 93MB. This is mostly because MongoDB manages storage by preallocating large segments under the assumption the empty space will soon be filled.

In any case, we learn even more from all of the above code. One benefit of MongoDB being a JavaScript-native engine is that we can directly manipulate objects using standard JavaScript syntax. It’s also convenient that the insert method allows passing document arrays, as inserting them one-by-one was orders of magnitude slower. Unfortunately, the mongo shell doesn’t provide automatic timing output the way psql does.

On the other hand, MongoDB allows us to define whole libraries of local functions that would only be valid for the current session, giving us macro potential Postgres users can only imagine. The necessity of wrapping everything in some kind of SQL-valid statement can often be a hindrance.
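
For instance, a throwaway helper defined straight in the shell works like a small macro; this is just an illustrative sketch, not part of the benchmark:

function timed(fn) {
    // Run fn, print the elapsed wall-clock time, and pass its result through.
    var start = new Date();
    var result = fn();
    print("elapsed ms: " + (new Date() - start));
    return result;
}

timed(function() { return db.sensorLog.count(); })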

Of course, a natural implication of using a JavaScript Engine is that object names are case sensitive. This is a given for nearly any language aside from SQL, but some of us will need to note the context change or we can get ourselves in trouble. This is another good reason to always name Postgres database objects in lowercase and use underscores. We don’t want to learn any inconsistent capitalization habits that may adversely affect us in other systems.

Let’s check out a few statements using Postgres and our sensor_log table.

\timing

UPDATE sensor_log
   SET reading = reading + 1
 WHERE reading_date >= CURRENT_DATE - INTERVAL '8 days'
   AND reading_date < CURRENT_DATE - INTERVAL '7 days';

Time: 114.671 ms

DELETE FROM sensor_log
 WHERE reading_date >= CURRENT_DATE - INTERVAL '9 days'
   AND reading_date < CURRENT_DATE - INTERVAL '8 days';

Time: 16.248 ms

SELECT COUNT(*) FROM sensor_log;

 count
--------
 991360
(1 row)

Time: 111.820 ms

SELECT *
  FROM sensor_log
 WHERE reading_date < CURRENT_DATE - INTERVAL '2 week'
 ORDER BY reading_date ASC
 LIMIT 5 OFFSET 20;

   id   | location | reading |    reading_date
--------+----------+---------+---------------------
 999980 |      980 |      80 | 2016-03-21 06:16:40
 999979 |      979 |      79 | 2016-03-21 06:16:50
 999978 |      978 |      78 | 2016-03-21 06:17:00
 999977 |      977 |      77 | 2016-03-21 06:17:10
 999976 |      976 |      76 | 2016-03-21 06:17:20
(5 rows)

Time: 1.034 ms

Let’s examine each MongoDB equivalent, with the timings in milliseconds. This first one is a translation of our UPDATE statement:

var currentDate = new Date()
currentDate.setHours(0,0,0,0)

start = new Date()
db.sensorLog.update({
    readingDate: {
      $gte: new Date(currentDate.getTime() - 8*86400000),
      $lt: new Date(currentDate.getTime() - 7*86400000)
    }
  },
  { $inc: { reading: 1 } },
  { multi: true }
)
(new Date() - start)

WriteResult({ "nMatched" : 8640, "nUpserted" : 0, "nModified" : 8640 })
77

It’s a bit faster than our Postgres version, but a bit more inconvenient to write. One universal element with MongoDB is the increased complexity of statements. The primary tradeoff for having such convenient access to JavaScript in MongoDB, is that everything must be a JavaScript object. Much like Postgres forces us to wrap everything in a SQL-based statement, MongoDB demands even the simplest query invokes a JavaScript function.

In the case of the update function, the first parameter is the search, and the second is the list of changes we want to make. It’s here where we encounter constructs like $inc to increment the value of a field. There are several of these specialized operators we need for manipulating our data.

The last parameter to update is a series of flags that modify how the update operates. For instance, we enabled multi, which updates all matching rows, overriding the default of updating a single row. That seems fairly nonintuitive to a longtime user of SQL-based systems.

Let’s examine how deletes work:

start = new Date()
db.sensorLog.remove({
  readingDate: {
    $gte: new Date(currentDate.getTime() - 9*86400000),
    $lt:  new Date(currentDate.getTime() - 8*86400000)
  }
})
(new Date() - start)

WriteResult({ "nRemoved" : 8640 })
145

While the update was a bit faster than Postgres, the delete was much slower. Beyond that, this is very similar to the update we did earlier. Often in MongoDB, the first parameter to a function will be some kind of query in the form of a JSON object. We should also point out that the default operation of the remove function is to act on all matches, the exact opposite of how update works. There is an argument to only remove the first match, but we didn’t use it here.

Let’s keep going and check out how queries themselves work:

start = new Date()
db.sensorLog.count()
(new Date() - start)

2

start = new Date()
db.sensorLog.find({
  readingDate: {
    $lt: new Date(currentDate.getTime() - 14*86400000)
  }
}).sort({readingDate: 1}).skip(20).limit(5)
(new Date() - start)

{ "_id" : 999980, "location" : 980, "reading" : 80, "readingDate" : ISODate("2016-03-21T11:16:40Z") }
{ "_id" : 999979, "location" : 979, "reading" : 79, "readingDate" : ISODate("2016-03-21T11:16:50Z") }
{ "_id" : 999978, "location" : 978, "reading" : 78, "readingDate" : ISODate("2016-03-21T11:17:00Z") }
{ "_id" : 999977, "location" : 977, "reading" : 77, "readingDate" : ISODate("2016-03-21T11:17:10Z") }
{ "_id" : 999976, "location" : 976, "reading" : 76, "readingDate" : ISODate("2016-03-21T11:17:20Z") }

2

Obtaining counts is refreshingly simple. One of the cool things about functional approaches like this is that operations are easily chainable. We can either use the count function directly, or tack it onto the end of a longer query to return the total matches. We can see such combinations in the second query with the inclusion of skip and limit, where Postgres would use OFFSET and LIMIT.

There are a lot of differences here. Diverse tools are a great resource for varying project requirements. It should be fairly clear that MongoDB is geared toward tight integration into JavaScript projects. The incredibly loose document structure requirements make it a perfect target for storing polymorphic data that consists of highly varying fields or otherwise convoluted structure. Usually we would prefer to abstract such things into a relational schema suitable for a standard database like Postgres. Yet in the case of highly nested object representations, that might not be practical.

Despite this, there is some small resemblance between Postgres and MongoDB. Mongo even provides a mapping to relate similar concepts. While many concepts have a suitable equivalent, others do not. We already mentioned that collections can be treated like tables, but since their structure is dictated by their contents, that’s not strictly true.

Other concepts like JOIN didn’t even exist until the latest MongoDB incarnation, 3.2. Even then, that functionality is only available during aggregation. The only other alternative is to embed nested queries within a forEach loop over a base result set. There’s actually a good reason for that, too. MongoDB is a document store, and as such, one could argue that elements we’re trying to look up are essential attributes of the underlying document. This is a world that operates in the land of denormalization, where doing otherwise is an impediment to the platform.

There’s always a bit of a “not quite” caveat when comparing elements between these two engines. I certainly learned a lot while digging around in a default MongoDB instance. While some features were cool and extremely convenient, others frustrated and confounded me. It’s certainly a different world over there in NoSQL land, yet the need for it is undeniable. This and many other reasons contributed to JSON and JSONB becoming integral parts of Postgres. With them, we get the best of both worlds: loosely structured, amorphous documents, plus relational logic to cement relationships and enforce at least some integrity within our datasets.
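As a tiny sketch of that mix (the table name, columns, sample document, and the scratch database test are all invented purely for illustration), a JSONB column can carry the loosely structured part of a record while ordinary columns and constraints keep the relational guarantees:

psql -d test <<'SQL'
-- Fixed, constrained columns for the relational part...
CREATE TABLE sensor_event (
    id           serial PRIMARY KEY,
    location     varchar NOT NULL,
    recorded_at  timestamptz NOT NULL DEFAULT now(),
    payload      jsonb NOT NULL          -- ...and a free-form document part
);

INSERT INTO sensor_event (location, payload)
VALUES ('A1', '{"reading": 42, "flags": ["calibrated"]}');

-- Relational filtering and document extraction in one query
SELECT location, payload->>'reading' AS reading
  FROM sensor_event
 WHERE recorded_at > now() - INTERVAL '1 day';
SQL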

It’s a bit of a foreign concept and technically breaks normalization rules, but I’m willing to consider the potential benefits. Rules are, after all, made to be broken.

Ivan Lezhnjov IV: Upgrading Ubuntu LTS and PostgreSQL

Planet PostgreSQL - 15 July, 2016 - 15:57

If your PostgreSQL instance is running on an Ubuntu LTS system that you need to upgrade to the most recent release, say from precise to trusty – because, well, sooner or later you must – you need to consider what is going to happen to your database.

Note
The upgrade process described in this article is similar to what you would have to do if you were upgrading from Trusty to Xenial, the newest Ubuntu LTS release.

Ubuntu attempts to make the process of upgrading to the newest distribution release easy and hassle-free. In many situations it succeeds, but not when PostgreSQL is running on the system. If you just go ahead and run the do-release-upgrade command, which is the officially recommended way to upgrade your Ubuntu LTS distribution to the newest release, you will end up seeing this error message:

...

Get:72 http://us.archive.ubuntu.com trusty-backports/universe Translation-en [34.6 kB]
Fetched 23.3 MB in 6s (0 B/s)

Checking package manager
Reading package lists... Done
Building dependency tree
Reading state information... Done
Building data structures... Done
Calculating the changes
Calculating the changes
Could not calculate the upgrade

An unresolvable problem occurred while calculating the upgrade.

This can be caused by:

* Upgrading to a pre-release version of Ubuntu
* Running the current pre-release version of Ubuntu
* Unofficial software packages not provided by Ubuntu

If none of this applies, then please report this bug using the
command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.

Not very helpful, is it? Well, clearly you need to troubleshoot. Where do you start?

Examine /var/log/dist-upgrade/main.log and look for ERROR messages. This is what you will see: 

...
2016-02-09 07:19:01,392 DEBUG blacklist expr '^postgresql-.*[0-9]\.[0-9].*' matches 'postgresql-plperl-9.3'
2016-02-09 07:19:01,393 DEBUG The package 'postgresql-plperl-9.3' is marked for removal but it's in the removal blacklist
2016-02-09 07:19:01,462 ERROR Dist-upgrade failed: 'The package 'postgresql-plperl-9.3' is marked for removal but it is in the removal blacklist.'
...

That's more specific. Basically, it says that if any postgresql-* package is installed, abort the mission. This means that what was supposed to be an easy and hassle-free, one-command-only upgrade has become your duty to figure out how to do it. Yeah, the promise of simplicity that Ubuntu broadcasts far and wide in its marketing campaigns clearly has its terms and conditions that may apply. Anyway, roll up your sleeves. We gotta do some manual work here.

It is not a problem with do-release-upgrade per se. APT itself is configured on Ubuntu LTS to never do anything with PostgreSQL packages, such as automatically removing postgresql-* packages:

/etc/apt/apt.conf.d/01autoremove-postgresql
// File installed by postgresql-common. Currently not updated automatically,
// but might be in future releases.
//
// We mark all PostgreSQL packages as NeverAutoRemove because otherwise apt
// would remove the old postgresql-x.y package when the "postgresql" meta
// package changes its dependencies to a new version, rendering the old
// database cluster inaccessible. As access to the cluster might depend on
// other modules (like datatypes), we use a pretty wide pattern here. We might
// tighten this to match only actually used PostgreSQL versions in the future.

APT
{
 NeverAutoRemove
 {
   "^postgresql-";
 };
};

It's a good thing, if you think about it. You don't want an OS upgrade tool, or even a package manager, to mess with your database.

So, before you even attempt to run do-release-upgrade and hope it lives up to Ubuntu's promise of making your life easier, you need to plan the upgrade of PostgreSQL itself. A new Ubuntu LTS release will most likely mean a new version of PostgreSQL, so put that plan together carefully.

It means you should really start by reading release notes of the new PostgreSQL version (available on the main page of postgresql.org) and see if you will need to do anything special to make sure your data isn't lost during the upgrade, and that any other requirement is met.

Once you have an idea of what's coming in the new version and what actions you may need to take to ensure that your upgrade is completed successfully, you can start working on getting your PostgreSQL server ready for the do-release-upgrade.

In most simple cases that don't require you to do anything special about PostgreSQL itself, it will come down to a number of steps:

  • Back up your data
  • apt-get remove postgresql-*
  • do-release-upgrade
  • Update PGDG repository configuration (precise->trusty)
  • apt-get update
  • apt-get -s dist-upgrade
  • Restore your data 
Back Up Your Data

You need to back up your cluster/databases and globals. You can read more about backups here. This time we're taking a look at how to use pg_dump; you can also upgrade using the pg_upgrade utility, which we'll cover next time. For example, the following uses pg_dumpall to back up globals and pg_dump to back up every database in your cluster:

 

# Dump cluster-wide objects (roles, tablespaces) to globals.sql
/usr/lib/postgresql/9.3/bin/pg_dumpall -g -Upostgres -p 5434 --file=globals.sql;

# List every non-template database and dump each one in custom format (one .sqlc file per database)
/usr/lib/postgresql/9.3/bin/psql -p 5434 -AtU postgres -c "SELECT datname FROM pg_database \
                         WHERE NOT datistemplate" | \
while read f;
  do /usr/lib/postgresql/9.3/bin/pg_dump -Upostgres -p 5434 --format=c --file=$f.sqlc $f;
done;

 

In this example, we use PostgreSQL 9.3 that was installed on Ubuntu Precise from the PGDG repository. Upon successful completion of do-release-upgrade, Ubuntu Trusty will have PostgreSQL 9.3 installed as its default PostgreSQL version from the official Ubuntu repositories. In our simple test setup all data survived do-release-upgrade just fine, and PostgreSQL works as expected after the operating system upgrade. However, in most real-life scenarios you will probably be making a transition from a lower to a higher PostgreSQL version, in which case you would need to upgrade by dumping and reloading your data.

Remove PostgreSQL Packages

This must sound kinda scary, but the way to successfully run do-release-upgrade is by making sure PostgreSQL doesn't get in its way. You literally need to remove PostgreSQL packages before attempting to run do-release-upgrade.

Just to be safe, make sure you first back up postgresql.conf, pg_hba.conf and other important configuration files. In fact, create a copy of the entire /etc/postgresql/9.3/ and /var/lib/postgresql/9.3 directories. The latter includes the data directory, which may be gigabytes or terabytes in size. You must have backups of your database anyway; just make sure they are recent, in a working state, and that there's a way to restore them in case APT messes up your data.
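A minimal sketch of those copies (the destination directory /root/pg-upgrade-backup is just an example, and the paths assume a single 9.3 cluster in the default locations):

sudo mkdir -p /root/pg-upgrade-backup
# Configuration files (postgresql.conf, pg_hba.conf, etc.)
sudo cp -a /etc/postgresql/9.3 /root/pg-upgrade-backup/etc-postgresql-9.3
# Data directory; potentially very large, so check free space first
sudo cp -a /var/lib/postgresql/9.3 /root/pg-upgrade-backup/var-lib-postgresql-9.3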

Once you're absolutely positive that you have backups of your entire database cluster and configuration, remove all PostgreSQL packages from the system:

$ sudo apt-get remove postgresql-* 

Note that apt-get remove will not delete your configuration or data. At least it shouldn't; that's what apt-get purge does. The only reason we recommend taking backups is that it's good practice, and you don't really want to rely on APT and learn one day that an obscure bug or a change in how the 'remove' action works results in data loss.

Run do-release-upgrade

Here's a checklist to go over before running do-release-upgrade:

Ensure /boot has enough disk space.

If it doesn't, do-release-upgrade will fail and display this error message:

"Not enough free disk space
The upgrade has aborted. The upgrade needs a total of 54.2 M free
space on disk '/boot'. Please free at least an additional 13.9 M of
disk space on '/boot'. Empty your trash and remove temporary packages
of former installations using 'sudo apt-get clean'."
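One way to check the available space on /boot and reclaim some of it (the autoremove step assumes the space is mostly taken up by old, automatically installed kernel packages, which is the usual culprit):

df -h /boot
# Clear out the local package cache
sudo apt-get clean
# Remove no-longer-needed automatically installed packages (typically old kernels)
sudo apt-get autoremove --purge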

3rd party software may break do-release-upgrade

PostgreSQL is one example. You will see do-release-upgrade notify you that the PGDG APT source list was disabled, in a typically non-specific fashion:

"Third party sources disabled
Some third party entries in your sources.list were disabled. You can
re-enable them after the upgrade with the 'software-properties' tool
or your package manager."

Open additional SSH session on port 1022 before do-release-upgrade starts upgrading packages

If you lose your current SSH session, you can retain access to your system through the second one.
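For example (substitute your own user and host; do-release-upgrade only starts the extra sshd on port 1022 after you confirm its prompt, when it detects it is running over SSH):

ssh -p 1022 admin@your-server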

Run do-release-upgrade

When you're ready, run:

$ sudo do-release-upgrade

Update PGDG Repository Configuration

Assuming do-release-upgrade went fine and you're using the PGDG repositories, you will need to uncomment the repository source line in /etc/apt/sources.list.d/pgdg.list and change precise-pgdg to trusty-pgdg. Your pgdg.list file would then look like this:

deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main
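If you prefer to script that edit, a rough sed equivalent follows (a sketch only; it assumes the change needed is un-commenting the deb line and swapping the release name, so inspect the file afterwards to make sure nothing else was disabled or appended during the upgrade):

sudo sed -i -e 's/^# *deb/deb/' -e 's/precise-pgdg/trusty-pgdg/' /etc/apt/sources.list.d/pgdg.list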

Resynchronize The Package Index Files

At this point you should be running Ubuntu Trusty LTS, and you need to run:

$ sudo apt-get update

Install Updates and Upgrades

You will probably notice that the postgresql-* packages in PGDG are a newer version than those installed during do-release-upgrade. So, in case you want to make sure you're using the most recent PGDG version of PostgreSQL, run:

$ sudo apt-get -s dist-upgrade

Drop -s (which tells apt-get to only simulate the changes) after making sure that the proposed changes look good.

Restore Your Data

Finally, restore your data using the newer version of PostgreSQL. For the sake of example, let's assume that you want to upgrade to PostgreSQL 9.5 once you're running trusty.

Essentially, to start using the new PostgreSQL version you would:

  • apt-get install postgresql-9.5
  • Restore your globals

Here is an example of how you could restore globals:

/usr/lib/postgresql/9.5/bin/psql -U postgres -p 5435 < /home/admin/do-release-upgrade/db-backups/globals.sql 
  • Restore databases:

Consider this example:

for i in databaseA databaseB databaseC; do
  /usr/lib/postgresql/9.5/bin/psql -U postgres -p 5435 -c "CREATE DATABASE $i;"
done

for i in databaseA databaseB databaseC; do
  /usr/lib/postgresql/9.5/bin/pg_restore -U postgres -p 5435 --dbname=$i $i.sqlc
done

The port number may be different on your system, and these examples are merely a guideline.

Although this is a complete set of actions required to upgrade your database, various factors unique to your setup may introduce other steps that need to be carried out in order to successfully upgrade PostgreSQL. There's no one-size-fits-all upgrade strategy, and this post demonstrates just one of the simplest scenarios you may encounter.

If your case is particularly complex to the point where you would rather have us double-check your upgrade plan or actually perform it for you, feel free to contact us and we'll figure out the best path to upgrade your system running PostgreSQL to the latest release.

Stefan Petrea: Analyzing PostgreSQL logs using pgBadger

Planet PostgreSQL - 15 July, 2016 - 10:15
Intro

Most web applications make use of databases. When deploying your app in production, you will need, among other things, monitoring and analysis/reports of log data generated by your application.

This comes in handy when troubleshooting problems, particularly in large applications with bottlenecks, where we need to find low-performing SQL queries.

Overview

This is a simplified diagram that shows the interactions between the main components of a typical web application production environment.

The users make requests to the web application. In order to serve those requests, the web application needs information from a database; in this case the database being used is PostgreSQL, so it makes queries to it.

The PostgreSQL database server executes the queries and returns data, but at the same time it also writes to log files on disk.

The log files are being rotated on a daily basis. So for each day, there is one log file. The log files start out empty. Each time they are being used again, they're truncated and new log data will be written to them.

[Diagram: User 1, User 2 and User 3 make requests to the web application, which queries the PostgreSQL database server; the server writes to seven rotating log files, one per weekday (Monday through Sunday).]

PostgreSQL logging and log rotation

      Most of the settings described here are documented in the 18.8 Error reporting and logging section of the PostgreSQL manual.

We're turning on the logging collector, and we make sure we have %w in the daily log filenames (this is the 0-based weekday represented as a decimal number, with 0 being Sunday, 1 being Monday, and so on).

Once the logging collector is enabled, the logs are no longer stored in /var/log/postgresql/; instead, they will be located at /var/lib/postgresql/9.4/main/pg_log/.

      We're also aiming to truncate the file on rotation.

logging_collector = on
log_filename = 'postgresql-%w.log'
log_file_mode = 0640
log_truncate_on_rotation = on
log_rotation_size = 600MB
log_min_duration_statement = 0
log_checkpoints = on
log_connections = on
log_disconnections = on
log_duration = on
log_lock_waits = on
log_line_prefix = '%t [%p]: [%l-1] db=%d,user=%u,app=%a,client=%h '

Running pgbadger on a schedule

      In order to run pgbadger periodically, we need to schedule it in cron.
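For example, a crontab entry along these lines would do it (the half-hourly schedule and the assumption that the script shown further below is saved as /home/user/bin/run_pgbadger.sh are both just illustrative):

# Run the pgBadger wrapper script every 30 minutes
*/30 * * * * /home/user/bin/run_pgbadger.sh >> /home/user/pgbadger/cron.log 2>&1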

      After running, pgBadger 8.1 seems to leave its pidfile behind, but we can easily clean up the pidfile for it.

      We only process/analyze log files that were recently modified (at the latest, 180 minutes ago).

      The -R 5 parameter tells pgbadger that it should only retain the last 5 weeks of binary data and reports.

      And finally, we delete PostgreSQL log files older than 2 days. In combination with the Pg 7-day rotating logs, this means we only keep 2 PostgreSQL log files (since there's one Pg log file per day, given the way we've set up PostgreSQL logging).

      We may have multiple applications using the same database. Since we've set up PostgreSQL to log the application name as well, we can build separate pgbadger reports for each application.

For example, if you're running the queries from a Bash script, you may use the PGAPPNAME environment variable, which psql picks up and sends to the PostgreSQL server, so the queries will be logged with that application name.

      The application name is usually passed in the connection string at application level. For example, if you're using a language like Python and the psycopg2 PostgreSQL driver, you can pass application_name in the connection string. The Java Pg driver pgjdbc-ng allows you to pass the applicationName parameter in the jdbc connection string.
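Here is a small sketch of both approaches from the shell (the application name nightly-report and the query are made up for illustration; test1 is the database used in the script below, and the application_name URI parameter is the libpq-level equivalent of what the language drivers expose):

#!/bin/bash
# psql inherits PGAPPNAME and reports it as application_name,
# so this query is logged under "nightly-report".
export PGAPPNAME="nightly-report"
psql -d test1 -c "SELECT count(*) FROM pg_stat_activity;"

# The same thing can be done per-connection in a libpq URI.
psql "postgresql://localhost/test1?application_name=nightly-report" \
     -c "SELECT count(*) FROM pg_stat_activity;"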

#!/bin/bash
PG_LOG_PATH=/var/lib/postgresql/9.4/main/pg_log/
PGBADGER="/home/user/bin/pgbadger --anonymize -q -R 5 "
ROOT=/home/user/pgbadger

check_pgbadger_pid() {
    if [[ -e '/tmp/pgbadger.pid' ]]; then
        PAST_PID=$(cat /tmp/pgbadger.pid)
        CURRENT_PIDS=$(pgrep -f pgbadger)
        if [[ $CURRENT_PIDS =~ $PAST_PID ]]; then
            echo "[ERROR] pgbadger still running"
            exit -1
        else
            rm '/tmp/pgbadger.pid'
        fi
    fi
}

RECENTLY_MODIFIED_LOGS=$(find $PG_LOG_PATH -mmin -180 -name "postgresql*")

mkdir -p "$ROOT" "$ROOT/bot" "$ROOT/metrics" "$ROOT/collector" "$ROOT/web"

$PGBADGER -l "$ROOT/.pgbadger_last_state_file" -d test1 -X -I -O "$ROOT" $RECENTLY_MODIFIED_LOGS
check_pgbadger_pid
$PGBADGER -l "$ROOT/bot/.pgbadger_last_state_file" -d test1 -X -I --appname "upstats-bot" -O "$ROOT/bot" $RECENTLY_MODIFIED_LOGS
check_pgbadger_pid
$PGBADGER -l "$ROOT/metrics/.pgbadger_last_state_file" -d test1 -X -I --appname "upstats-metrics" -O "$ROOT/metrics" $RECENTLY_MODIFIED_LOGS
check_pgbadger_pid
$PGBADGER -l "$ROOT/collector/.pgbadger_last_state_file" -d test1 -X -I --appname "upstats-collector" -O "$ROOT/collector" $RECENTLY_MODIFIED_LOGS
check_pgbadger_pid
$PGBADGER -l "$ROOT/web/.pgbadger_last_state_file" -d test1 -X -I --appname "upstats-web" -O "$ROOT/web" $RECENTLY_MODIFIED_LOGS
check_pgbadger_pid

OLD_LOGS=$(find $PG_LOG_PATH -type f -mtime +2)
if [ ! -z "$OLD_LOGS" ]; then rm $OLD_LOGS; fi

      After analyzing the logs, pgBadger will create a number of reports including:

      • Queries that took up the most time
      • Histogram of query times
      • Overall statistics
      • The most frequent waiting queries
      • Queries per user and total duration per user
      • Distribution of queries type per database/application
      • Queries by type (select/insert/update/delete)
      • SQL queries statistics
      Conclusion

      We've covered the usage of a log analysis tool called pgBadger in a PostgreSQL setup configured with rotating logs.

Craig Ringer: How to check the lock level taken by operations in PostgreSQL

Planet PostgreSQL - 15 July, 2016 - 04:24

PostgreSQL’s manual is generally pretty clear about the locks taken by various operations – but nothing’s perfect. If you need to, you can also ask PostgreSQL directly.

You can check lock levels trivially with psql or PgAdmin.

For example, to see what lock alter table some_table disable trigger some_trigger; takes:

test=> BEGIN;
BEGIN
test=> ALTER TABLE some_table DISABLE TRIGGER some_trigger;
ALTER TABLE
test=> SELECT locktype, mode FROM pg_locks
       WHERE pid = pg_backend_pid() AND relation = 'some_table'::regclass;
 locktype |         mode
----------+-----------------------
 relation | ShareRowExclusiveLock
(1 row)

That’s for a lock on a table. It’s easy to see from this that we take a SHARE ROW EXCLUSIVE lock, which according to the manual:

… protects a table against concurrent data changes, and is self-exclusive so that only one session can hold it at a time.

Other uses of pg_locks

You can also filter on pg_locks in other ways to see other locks held by a transaction. You’ll have to do some joins on pg_class etc to decode the relation OIDs to names – which is why we really need a pg_stat_locks view in PostgreSQL to make this easier.
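A rough sketch of such a join (the database name test is illustrative; note that relation names resolve only for the database you are connected to, which is part of why a friendlier view would be nice):

psql -d test <<'SQL'
SELECT l.pid, l.locktype, l.mode, l.granted, c.relname
  FROM pg_locks l
  LEFT JOIN pg_class c ON c.oid = l.relation
 ORDER BY l.pid;
SQL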

People use a variety of canned queries of varying quality to see which processes currently block others. With the addition of more detailed lock wait information and pg_blocking_pids() in 9.6, this will get a lot easier, though 9.6 doesn’t add a helper view yet.

What are the virtualxid and transactionid locks?

One important and possibly confusing thing you’ll see in pg_locks is that every transaction holds a special lock on itself, called the virtualxid lock:

test=> BEGIN;
BEGIN
test=> SELECT * FROM pg_locks WHERE pid = pg_backend_pid();
  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |      mode       | granted | fastpath
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+-----------------+---------+----------
 relation   |    16386 |    11673 |      |       |            |               |         |       |          | 2/3983             | 24250 | AccessShareLock | t       | t
 virtualxid |          |          |      |       | 2/3983     |               |         |       |          | 2/3983             | 24250 | ExclusiveLock   | t       | t
(2 rows)

test=> SELECT '11673'::regclass;
 regclass
----------
 pg_locks
(1 row)

As you can see, the relation AccessShareLock is just the lock we take on pg_locks when we query it, so you can ignore that.

The virtualxid lock is special. It’s an exclusive lock on the transaction’s own virtual transaction ID (the “2/3983” above) that every transaction always holds. No other transaction can ever acquire it while the transaction is running. The purpose of this is to allow one transaction to wait until another transaction commits or rolls back using PostgreSQL’s locking mechanism, and it’s used internally. You don’t normally need to use it yourself, but it’s useful to understand what it is when you see it in pg_locks.

There’s a similar entry for transactions that get a real read/write transaction ID that other transactions can use to wait until they commit or roll back:

test=> select txid_current();
 txid_current
--------------
         2774
(1 row)

test=> SELECT * FROM pg_locks WHERE pid = pg_backend_pid();
   locktype    | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |      mode       | granted | fastpath
---------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+-----------------+---------+----------
 relation      |    16386 |    11673 |      |       |            |               |         |       |          | 2/3984             | 24250 | AccessShareLock | t       | t
 virtualxid    |          |          |      |       | 2/3984     |               |         |       |          | 2/3984             | 24250 | ExclusiveLock   | t       | t
 transactionid |          |          |      |       |            |          2774 |         |       |          | 2/3984             | 24250 | ExclusiveLock   | t       | f
(3 rows)

So if you’ve ever wondered what those are, now you know.

You’ll see a transactionid lock wait when two concurrent transactions try to insert the same key into a unique index (or primary key), among other things. In that case the second transaction must wait until the first commits or rolls back to know whether it should fail with an error or continue its insert.
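You can reproduce that easily from the shell. In this sketch the table, database name and timings are invented, and the first psql call is backgrounded so the two inserts overlap:

# A throwaway table with a primary key
psql -d test -c "CREATE TABLE t (id int PRIMARY KEY);"

# Session 1: insert id=1 and hold the transaction open for 30 seconds
psql -d test -c "BEGIN; INSERT INTO t VALUES (1); SELECT pg_sleep(30); COMMIT;" &
sleep 1

# Session 2: same key; this blocks on session 1's transactionid lock,
# then fails with a duplicate key error once session 1 commits.
psql -d test -c "INSERT INTO t VALUES (1);"
wait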

Database .NET 18.9 released!

PostgreSQL News - 15 July, 2016 - 03:00

I am happy to announce the new major release of Database .NET 18.9. It is an innovative, powerful and intuitive multiple database management tool. With it you can browse objects, design tables, edit rows, export data, run queries and migrate data with a consistent interface.

Free, All-In-One, Portable, Single executable file and Multi-language.

Major New features from version 18.1 to 18.9:

  • PostgreSQL 9.6 compatibility
  • Added Support for multi-dimensional arrays (PG)
  • Added Support for bigserial datatype (PG)
  • Added Support for Reporting Errors and Messages (PG)
  • Added Support for LISTEN (listen for a notification) (PG)
  • Added Support for browsing delimited text files
  • Added Automatic reconnection
  • Ability to Cancel executing command
  • Ability to Copy Selected Results to Clipboard
  • Ability to Calculate Selected Cells
  • Ability to Export Results to PSV/SSV (Pipe and Semicolon delimited) (.psv;.ssv)
  • Improved SQL History
  • Improved Query Builder
  • Improved Binding Procedure and Function parameters
  • Improved Transaction Mode
  • Improved Find In Grid
  • Improved Data Import and Export
  • Improved Data Editor and Browser
  • Improved Generating Batch Scripts
  • Improved SQL Editor
  • Improved AutoComplete and IntelliSense
  • Improved SQL File Version Control
  • Improved Database Migration and Data Synchronization (Pro)
  • ...and more
For more information please visit http://fishcodelib.com/Database.htm.

Hans-Juergen Schoenig: Inheritance – One more reason to love PostgreSQL

Planet PostgreSQL - 14 July, 2016 - 11:31

After doing full-time PostgreSQL consulting for over 16 years now, I actually don’t remember a time without inheritance anymore. Of course things were improved over time, but in my head it has always been there and it has always just worked as expected. After so many years I still love the feature because it offers […]

The post Inheritance – One more reason to love PostgreSQL appeared first on Cybertec - The PostgreSQL Database Company.
