Feed aggregator

Greg Sabino Mullane: DBD::Pg escaping placeholders with backslashes

Planet PostgreSQL - 12 January, 2015 - 23:07

Image by Rick Audet

The popularity of using JSON and JSONB within Postgres has forced a solution to the problem of question mark overload. JSON (as well as hstore) uses the question mark as an operator in its queries, and Perl DBI (esp. DBD::Pg) uses the question mark to indicate a placeholder. Version 3.5.0 of DBD::Pg has solved this by allowing the use of a backslash character before the question mark, to indicate it is NOT a placeholder. We will see some code samples after establishing a little background.

First, what are placeholders? They are special characters within a SQL statement that allow you to defer adding actual values until a later time. This has a number of advantages. First, it completely removes the need to worry about quoting your values. Second, it allows efficient re-use of queries. Third, it reduces network traffic as you do not need to send the entire query each time it is re-run. Fourth, it can allow for seamless translation of data types from Postgres to your client language and back again (for example, DBD::Pg translates easily between Perl arrays and Postgres arrays). There are three styles of placeholders supported by DBD::Pg - question marks, dollar-signs, and colon-names.
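For illustration, here is a minimal sketch of the three styles (the query and connection details are just examples):

use DBI;
my $dbh = DBI->connect('dbi:Pg:dbname=test', 'user', 'pass', { RaiseError => 1 });

# Question marks: values are bound by position
my $sth = $dbh->prepare('SELECT relname FROM pg_class WHERE relpages > ?');
$sth->execute(24);

# Dollar signs: numbered, so the same value can be reused
$sth = $dbh->prepare('SELECT relname FROM pg_class WHERE relpages > $1');
$sth->execute(24);

# Colon names: bound explicitly by name
$sth = $dbh->prepare('SELECT relname FROM pg_class WHERE relpages > :pages');
$sth->bind_param(':pages', 24);
$sth->execute();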

Next, what are Postgres operators? They are special symbols within a SQL statement that perform some action, using the strings to their left and right as inputs. It sounds more complicated than it is. Take this query:

SELECT count(*) FROM pg_class WHERE relpages > 24;

In this case, the operator is ">" - the greater than sign. It compares the things on its left (in this case, the value of the relpages column) with the things on its right (in this case, the number 24). The operator will return true or false - in this case, it will return true only if the value on its left is larger than the value on its right. Postgres is extremely extensible, which means it is easy to add all types of new things to it. Adding your own operator is fairly easy. Here's an example that duplicates the greater-than operator, but with a ? symbol:

CREATE OPERATOR ? (procedure=int4gt, leftarg=integer, rightarg=integer);

Now the operator is ready to go. You should be able to run queries like this:

SELECT count(*) FROM pg_class WHERE relpages ? 24;

The list of characters that can make up an operator is fairly small. The documentation has the detailed rules, but the basic list is + - * / < > = ~ ! @ # % ^ & | ` ?. Note that an operator can consist of more than one character, for example, >=.
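For instance, a hypothetical two-character operator can be created the same way as the single-character one above:

CREATE OPERATOR ?> (procedure=int4gt, leftarg=integer, rightarg=integer);
SELECT count(*) FROM pg_class WHERE relpages ?> 24;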

A question mark inside a SQL query can be both a placeholder and an operator, and the driver has no real way to figure out which is which. The first real use of a question mark as an operator was with the geometric operators, and then with the hstore module, which allows storing and querying of key/value pairs. It uses a lone question mark to determine if a given value appears as a key in an hstore column. For example, if the goal is to find all rows in which an hstore column contains the key foobar, the SQL would be:

SELECT * FROM mytable WHERE myhstorecol ? 'foobar';

However, if you were to try this via a Perl script using the question-mark placeholder style, DBD::Pg would get confused (and rightly so):

$sth = $dbh->prepare('SELECT * FROM mytable WHERE myhstorecol ? ?');
$sth->execute('foobar');
DBD::Pg::st execute failed: called with 1 bind variables when 2 are needed

Trying to use another placeholder style still does not work, as DBD::Pg still picks the question mark up as a possible placeholder:

$sth = $dbh->prepare('SELECT * FROM mytable WHERE myhstorecol ? $1');
$sth->execute('foobar');
Cannot mix placeholder styles "?" and "$1"

A few years ago, a solution was developed: by setting the database handle attribute "pg_placeholder_dollaronly" to true, DBD::Pg will ignore the question mark and only treat dollar-sign numbers as placeholders:

$dbh->{pg_placeholder_dollaronly} = 1;
$sth = $dbh->prepare('SELECT * FROM mytable WHERE myhstorecol ? $1');
$sth->execute('foobar'); ## No error!

Then came JSON and JSONB. Just like hstore, they have three operators with question marks in them: ?, ?& and ?| - all of which will prevent the use of question-mark placeholders. However, some frameworks and supporting modules (e.g. SQL::Abstract and DBIx::Class) only support the question mark style of placeholder! Hence, another solution was needed. After some discussion on the dbi-users list, it was agreed that a backslash before a placeholder character would allow that character to be "escaped" and sent as-is to the database (minus the backslash). Thus, as of version 3.5.0 of DBD::Pg, the above query can be written as:

use DBD::Pg 3.5.0;

$SQL = "SELECT * FROM mytable WHERE hstorecol \\? ?";
$sth = $dbh->prepare($SQL);
$sth->execute('foobar'); # No error!

$SQL = "SELECT * FROM mytable2 WHERE jsoncol \\? ?";
$sth = $dbh->prepare($SQL);
$sth->execute('foobar'); # Still no error!

So, a fairly elegant solution. The only caveat is to beware of single and double quotes. The latter require two backslashes, of course, because Perl itself consumes one when interpolating the string. I recommend you always use double quotes and get in the habit of consistently using double backslashes. Not only will you thus never have to worry about single-vs-double, but it adds a nice little visual garnish to help that important backslash trick stand out a little more.
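To make the quoting caveat concrete, a small sketch (same hypothetical table as above):

# Double quotes: Perl strips one backslash, so two are needed
# for one to reach DBD::Pg:
$SQL = "SELECT * FROM mytable WHERE myhstorecol \\? ?";

# Single quotes: the backslash survives as-is, so one is enough:
$SQL = 'SELECT * FROM mytable WHERE myhstorecol \? ?';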

Much thanks to Tim Bunce for reporting this issue, herding it through dbi-users, and helping write the final DBD::Pg solution and code!

ByteDesigner version 2.3 is available

PostgreSQL News - 11 January, 2015 - 03:00

NipData, the provider of data modeling, presentation and integration software, is very pleased to announce the release of ByteDesigner 2.3.

See the entire news here: http://www.nipdata.com/news/

ByteDesigner is the emerging enterprise tool for designing databases at all levels of an organization, using the same concepts found in tools costing thousands of dollars!

It's a powerful database design tool that can design complex systems, with an advanced SQL editor and support for all major RDBMSs.

Online documentation is found here: http://www.nipdata.com/faq/byte-designer-manual.html

Supported platforms (13): Linux (Intel and PowerPC), Solaris (Intel and Sparc), Windows (Intel), Mac OS X (Intel and PowerPC), for both 32 and 64 bit.

Databases supported (9): Oracle, ODBC, SQL92, MySQL, PostgreSQL, DB2, FireBird, MS SQL, SQLBase

gabrielle roth: PDXPUG: January Meeting addendum

Planet PostgreSQL - 10 January, 2015 - 19:49

PostgreSQL Core Team member Magnus Hagander is here in Stumptown, and will be attending next week’s meeting. Come on by & say hello!


Josh Berkus: pgDay SF 2015 schedule & registration

Planet PostgreSQL - 9 January, 2015 - 07:41

The draft schedule for pgDay SF 2015 is up, and registration is now open! Register now for a full day of PostgreSQL learning and socializing. I'm also looking for volunteers and lightning talk speakers.

We think we have a good variety of topics including PostGIS, internals, mobile applications, performance, scaling, 9.4 features and 9.5 features. So something for everyone, including a couple of guest speakers from far out of town. Registration also includes the morning keynote, lunch, and a nifty SFPostgres t-shirt special for the occasion.

You can also register for the entirety of FOSS4G-NA, which includes the pgDay. If you are a member of the PostgreSQL community and want to get a full pass, contact Josh Berkus and ask about a discount code.

Really want to go but can't afford registration? I'm looking for two volunteers to help me run the pgDay. In return for helping me with toting boxes, checkin, distributing shirts, and troubleshooting, you will get a free pass for the day. Again, contact us if you can volunteer.

I am also looking for lightning talk speakers! We will be doing an end-of-day lightning talk session with 10 Postgres mini-talks.

pgDay SF 2015 is sponsored by Citus Data, 2ndQuadrant, EnterpriseDB, PostgreSQL Experts, and Google.

gabrielle roth: PDXPUG: January meeting next week

Planet PostgreSQL - 9 January, 2015 - 03:48

When: 6-8pm Thu Jan 15, 2015
Where: Iovation
Who: Gabrielle Roth
What: RDS Postgres

Amazon integrated Postgres into their general Relational Database Services (RDS) offerings July 1, 2014. RDS offers automatic failover, scalability, and easy configuration, plus Amazon handles all the patching and upgrades for you! Their SDK supports many languages and includes a robust CLI, if you want to automate management of your deployments. They also support several popular Postgres extensions.
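For a taste of that automation, here is a hypothetical sketch using the AWS CLI (all identifiers and values are made up):

aws rds create-db-instance \
    --db-instance-identifier mypg \
    --engine postgres \
    --db-instance-class db.m3.medium \
    --allocated-storage 100 \
    --master-username master \
    --master-user-password secret123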

Of course, since it’s a managed service, you can’t do everything you might expect or want to. In this talk, I’ll discuss the fun, the quirky, and the downright maddening features I’ve discovered working with production RDS deployments. Come find out what it’s like to use RDS Postgres day-to-day.

Our meeting will be held at Iovation, on the 32nd floor of the US Bancorp Tower at 111 SW 5th (5th & Oak). It’s right on the Green & Yellow Max lines. Underground bike parking is available in the parking garage; outdoors all around the block in the usual spots. No bikes in the office, sorry!

Elevators open at 5:45 and building security closes access to the floor at 6:30.

We’ll go to Huber’s for after-meeting socializing.

See you there!


Hubert 'depesz' Lubaczewski: Changes in explain.depesz.com

Planet PostgreSQL - 9 January, 2015 - 00:17
This time the changes do not modify how the website works, but they make it simpler to deploy a clone of explain.depesz.com on your own server. Thanks to Petr A. Korobeinikov, I got full automation for setting up test/development/internal clones of the site. What was previously done manually can now be done very fast using Vagrant or […]

Michael Paquier: Postgres 9.5 feature highlight: New WAL format

Planet PostgreSQL - 8 January, 2015 - 09:35

Today, here is an explanation of one of the largest patches that have hit PostgreSQL 9.5, illustrated by this commit:

commit: 2c03216d831160bedd72d45f712601b6f7d03f1c
author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date: Thu, 20 Nov 2014 17:56:26 +0200
Revamp the WAL record format.

Each WAL record now carries information about the modified relation and
block(s) in a standardized format. That makes it easier to write tools
that need that information, like pg_rewind, prefetching the blocks to
speed up recovery, etc.

93 files changed, 3945 insertions(+), 4366 deletions(-)

Every user of PostgreSQL knows WAL as a sort of internal journal used by the system to ensure data integrity at recovery, with a set of registered REDO actions.

A 7k-line patch is a lot for a feature that does not really impact normal users. This new WAL facility finds its interest among developers working on tools that manipulate WAL: the new set of APIs makes it possible to easily track the relation blocks touched by a WAL record, without needing to know the WAL record type involved, by exposing the block information at a higher level.

This patch has introduced some additional infrastructure in the xlog*.c files managing WAL record insertion and decoding, and this proves to be a net gain in code readability in the other portions of the code doing WAL insertions. For example, here is roughly how a WAL record was inserted in 9.4 and older versions, with each code path filling in sets of XLogRecData (here, the initialization of a sequence relation):

XLogRecData rdata[2];

xlrec.node = rel->rd_node;

rdata[0].data = (char *) &xlrec;
rdata[0].len = sizeof(xl_seq_rec);
rdata[0].buffer = InvalidBuffer;
rdata[0].next = &(rdata[1]);

rdata[1].data = (char *) tuple->t_data;
rdata[1].len = tuple->t_len;
rdata[1].buffer = InvalidBuffer;
rdata[1].next = NULL;

recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG, rdata);

And here is how it is changed in 9.5:

XLogBeginInsert();
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqtuple.t_data, seqtuple.t_len);

recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);

The important point in those APIs is XLogRegisterBuffer, which can be used to add information related to a data block in a WAL record. Then the facility in xlogreader.c/h (already present in 9.4), particularly XLogReadRecord, can be used by either backend or frontend tools to decode the record information.

Let's see for example how this has affected pg_rewind, a facility able to re-sync a node with another one that has forked from it, typically to reuse an old master and reconnect it to a promoted standby. The method used by this tool is roughly to take a diff of the blocks between the two nodes since the fork point, and then to copy the touched blocks back to the target. In the version based on Postgres 9.4, the block information is extracted by looking at the resource manager type and then at the record type, in a way similar to this:

void
extractPageInfo(XLogRecord *record)
{
    uint8 info = record->xl_info & ~XLR_INFO_MASK;

    /* For each resource manager */
    switch (record->xl_rmid)
    {
        case RM_XLOG_ID:
            [...]
        case RM_HEAP_ID:
        {
            /* For each record type */
            switch (info & XLOG_HEAP_OPMASK)
            {
                case XLOG_HEAP2_FREEZE_PAGE:
                    /* Get block info */
                    foo();
                [...]
            }
        }
        [...]
    }
}

Maintaining such a thing is a pain, because it is impacted by every WAL format modification, something that can happen when a new feature is introduced or when an existing record format is changed. Note that this needs close monitoring of the commits happening in upstream PostgreSQL, because WAL format modifications are not directly mentioned in the release notes (this makes sense, as this is not something users need to bother with). Now let's see how the new WAL format has affected the block information extraction in pg_rewind, which is now done with something like this:

void
extractPageInfo(XLogRecord *record)
{
    /* Handle exceptions */
    [...]

    for (block_id = 0; block_id <= record->max_block_id; block_id++)
    {
        RelFileNode rnode;
        ForkNumber  forknum;
        BlockNumber blkno;

        if (!XLogRecGetBlockTag(record, block_id, &rnode, &forknum, &blkno))
            continue;
        process();
    }
}

Note that the XLogRecord record now directly contains the number of blocks touched, and the new API XLogRecGetBlockTag retrieves the relation information of those blocks. In terms of numbers, the result is a neat reduction of 600 lines in the code of pg_rewind, as well as many hours of maintenance saved for developers working on tools doing differential operations based on analysis of the WAL content.

Greg Sabino Mullane: DBD::Pg: one ping to rule them all

Planet PostgreSQL - 8 January, 2015 - 06:37

Yellow Submarine by Wolfgang Metzner

How can you tell if your database connection is still valid? One way, when using Perl, is to use the ping() method. Besides backslash-escaped placeholders, a revamped ping() method is the major change in the recently released version 3.5.0 of DBD::Pg, the Perl/DBI interface to Postgres. Before 3.5.0, there was a chance of false positives when using this method. In particular, if you were inside of a transaction, DBD::Pg did not actually attempt to contact the Postgres backend. This was definitely an oversight, and DBD::Pg now does the right thing.

Detecting a dead backend is a little trickier than it sounds. While libpq stores some state information for us, the only way to be sure is to issue a command to the backend. Additionally, we check the value of PQstatus in case libpq has detected a problem. Realistically, it would be far better if the Postgres protocol supported some sort of ping itself, just a simple answer/response without doing anything, but there is nothing like that yet. Fortunately, the command that is issued, /* DBD::Pg ping test, v3.5.0 */, is very lightweight.
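As a sketch of typical usage (connection details here are made-up examples), one might guard a long-lived handle like this:

use DBI;
my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass', { RaiseError => 1 });

# ... much later, before reusing the handle ...
unless ($dbh->ping) {
    # The backend went away; establish a fresh connection
    $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass', { RaiseError => 1 });
}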

One small side effect is that the ping() method (and its stronger cousin, the pg_ping() method) will cancel any COPY that happens to be in progress. Really, you should not be doing that anyway! :) Calling the next copy command, either pg_getline() or pg_putline(), will tell you if the connection is valid anyway. Since the copy system uses a completely different backend path, this side effect is unavoidable.
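For reference, a minimal sketch of the COPY flow (pg_putcopydata/pg_putcopyend are the newer counterparts of pg_putline; the table is hypothetical):

$dbh->do('COPY mytable FROM STDIN');
$dbh->pg_putcopydata("42\tfoobar\n");  # a dead connection is reported here
$dbh->pg_putcopyend();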

Even this small change may cause problems for applications that relied on the previous false-positive behavior. Leaving ping() as a basic no-op inside transactions, however, was not a good idea, so check that your application is using ping() sanely. For most applications, simple exception handling will negate the need to use ping() in the first place.
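For illustration, a hedged sketch of the exception-handling approach (the query and reconnect details are examples, continuing the $dbh handle from above):

# Run the statement and handle a dead connection only when a
# failure actually occurs, instead of pinging first:
my $count = eval {
    $dbh->selectrow_array('SELECT count(*) FROM pg_class');
};
if ($@) {
    warn "Query failed, reconnecting: $@";
    $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass', { RaiseError => 1 });
}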

Yann Larrivee: ConFoo 2015 – Become a Master

Planet PostgreSQL - 7 January, 2015 - 05:46

We want you to learn as much as possible during the three days of conference. We do that through quality and variety of both content and speakers, as well as creating a fun and friendly atmosphere.

We have presentations for any level, from beginner to advanced. You’ll learn about the backend and frontend, web and mobile, information systems and games, hard and soft skills, as well as many related topics.

We have speakers from many different countries and industries. They bring unique perspectives that you won’t find in your backyard. How often will you talk tech with someone from Taiwan or Saudi Arabia? How often will you learn from people who build robots or write their own programming language?

If your goal is to learn and meet interesting people, then ConFoo is the conference for you! Take advantage of our $190 early bird discount.

DBD::Pg 3.5.0 released

PostgreSQL News - 7 January, 2015 - 03:00

The latest version of DBD::Pg, the Perl driver for Postgres, has been released. This version includes a much improved ping() method, and support for the new backslash-escaped placeholders, which will make working with things such as JSON much easier. The latest version can always be found on CPAN.

Leo Hsu and Regina Obe: PLV8 binaries for PostgreSQL 9.4 windows both 32-bit and 64-bit

Planet PostgreSQL - 6 January, 2015 - 08:53

I'm still in the middle of building packages for our clients, for the packages that aren't normally distributed for Windows, to make upgrading to PostgreSQL 9.4 smooth. One of those is PL/V8, which we use for some custom functions. I had mentioned how I build PL/V8 for PostgreSQL on Windows, but those instructions are a bit out of date, so I put more up-to-date instructions on my gist page. I tend to use gist a lot as a public scrapbook, with hopes someone else can learn from my toils and save themselves some trouble. At some point I should probably get more organized with my scrapbooks.


Continue reading "PLV8 binaries for PostgreSQL 9.4 windows both 32-bit and 64-bit"

Josh Berkus: First South Bay Postgres Meetup

Planet PostgreSQL - 5 January, 2015 - 01:19

Pivotal is hosting the first South Bay PostgreSQL Meetup we've had in quite a while. Peter will be speaking about PostgreSQL 9.4, and Mason will talk about Postgres-XL. If you live or work near Palo Alto and haven't been able to make it to the San Francisco meetups, this will be perfect for you.

Josh Berkus: NZPUG meetup in Wellington January 20th

Planet PostgreSQL - 5 January, 2015 - 00:20
Thanks to Derek Sievers of CDBaby, we've scheduled and found a venue for an NZPUG meetup in Wellington while I'm there.  The meetup will be from 6pm to 9pm on January 20th, and I will be presenting on PostgreSQL 9.4's new features.

I've created a wiki page with the details of all of the happenings in New Zealand during my Auckland/Wellington trip; please RSVP if you are attending so we can get a headcount.

Leo Hsu and Regina Obe: Updated Foreign Data Wrappers for PostgreSQL 9.3 Windows

Planet PostgreSQL - 4 January, 2015 - 01:05

As stated in the last article, I've packaged FDW binaries for PostgreSQL 9.3 Windows 32-bit and 64-bit and added in the ogr_fdw one. We've tested these with the standard EDB VC++-built PostgreSQL Windows installs and they work fine with those.

This package is an updated list from the ones we've distributed before; it includes ogr_fdw and is recompiled with the latest source from www_fdw and file_textarray.


Continue reading "Updated Foreign Data Wrappers for PostgreSQL 9.3 Windows"

Andrew Dunstan: pg_repack redux

Planet PostgreSQL - 31 December, 2014 - 19:52
I got an idea the other day about how to get over the recent problem I had encountered with pg_repack colliding with a lock request on the table made while it was copying data, but before it tried to rebuild indexes.

After some considerable time with the debugger, I found that the problem is that pg_repack tries to get the index definition only at the time it builds the index, and that pg_get_indexdef() requires an Access Share lock on the table. The end result is that the lock request gets blocked waiting for pg_repack, and pg_repack gets blocked waiting for the lock requester, and there they stay.

My initial solution was to move the index definition fetching to just after the place where the table definition is fetched. However, that ran into problems with existing strong locks, because it's before the brief time near the start of pg_repack's run where it obtains a strong lock on the table and cancels anything else holding such a lock. So I then moved it inside the section where it already holds a strong lock on the table. With that change, it now works both when there is an existing strong lock on the table and when there is a subsequent strong lock request on the table during the copy data phase. I've sent a pull request for this fix to the repo.
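As a small aside, the lock requirement is easy to demonstrate from SQL (the index name here is hypothetical):

-- pg_get_indexdef() needs an Access Share lock on the parent table, so this
-- call will block behind any pending ACCESS EXCLUSIVE lock request:
SELECT pg_get_indexdef('my_index'::regclass);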

Dinesh Kumar: Heterogeneous Database Sync

Planet PostgreSQL - 30 December, 2014 - 18:00
Hi

As a part of an Oracle to PostgreSQL migration, I came across the need to implement a trigger on Oracle which syncs its data to PostgreSQL. I have tried it with a simple table as below, which is hopefully helpful to others.

See this link to configure the heterogeneous dblink to Postgres.

I believe the approach below works effectively with Oracle tables that have a primary key.
If a table has no primary key, the generated UPDATE and DELETE statements can affect multiple rows in Postgres, which leads to correctness and performance issues.

ORACLE
CREATE TABLE test(t INT PRIMARY KEY);

CREATE OR REPLACE TRIGGER testref AFTER INSERT OR UPDATE OR DELETE ON test
FOR EACH ROW
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
C number;
N number;
BEGIN
c:=DBMS_HS_PASSTHROUGH.OPEN_CURSOR@pglink;
IF INSERTING THEN
DBMS_HS_PASSTHROUGH.PARSE@pglink(c, 'INSERT INTO test VALUES('||:NEW.t||');');
n:=DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@pglink(c);
ELSIF DELETING THEN
DBMS_HS_PASSTHROUGH.PARSE@pglink(c, 'DELETE FROM test WHERE t='||:OLD.t);
n:=DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@pglink(c);
ELSIF UPDATING THEN
DBMS_HS_PASSTHROUGH.PARSE@pglink(c, 'UPDATE test SET t='||:NEW.T||' WHERE t='||:OLD.T);
n:=DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@pglink(c);
END IF;
DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@pglink(c);
COMMIT;
END;
/
SQL> INSERT INTO test VALUES(1);
SQL> INSERT INTO test VALUES(2);
SQL> INSERT INTO test VALUES(3);


SQL> DELETE FROM test WHERE t=2;

SQL> UPDATE test SET t=10 WHERE t=1;
SQL> COMMIT;
Commit complete.

PostgreSQL
CREATE TABLE test(t INT PRIMARY KEY);
postgres=# SELECT * FROM test;
t
----
3
10
(2 rows)

As usual, your inputs are welcome.

 --Dinesh Kumar

Raghavendra Rao: Implementing Switchover/Switchback in PostgreSQL 9.3.

Planet PostgreSQL - 29 December, 2014 - 14:53
This post educates sophisticated DBAs on how to set up a graceful Switchover and Switchback environment in PostgreSQL high availability. Firstly, thanks to patch authors Heikki and Fujii for making Switchover/Switchback easier in PostgreSQL 9.3 (pardon me if I missed other names).

Let me attempt to illustrate briefly how things stood prior to these patches. As you all know, Standbys are critical components in achieving fast and safe disaster recovery. In PostgreSQL, the recovery concept deals mainly with timelines, which identify a series of WAL segments before and after a PITR or a promotion of a Standby, to avoid overlapping WAL segments. Timeline IDs are embedded in WAL segment file names (e.g., in $PGDATA/pg_xlog/0000000C000000020000009E, the prefix "0000000C" is the timeline ID). In Streaming Replication, both Primary and Standby follow the same timeline ID; however, when the Standby is promoted as the new master by a Switchover, it bumps the timeline ID, and the old Primary then refuses to restart as a Standby due to the timeline ID difference, throwing an error message such as:
FATAL: requested timeline 10 is not a child of this server's history
DETAIL: Latest checkpoint is at 2/9A000028 on timeline 9, but in the history of the requested timeline, the server forked off from that timeline at 2/99017E68.
Thus, a new Standby had to be built from scratch; if the database size is huge, the rebuild takes even longer, and for that period the newly promoted Primary runs without a Standby. There is also another issue: when a Switchover happens, the Primary does a clean shutdown and the Walsender process sends all outstanding WAL records to the standby, but it does not wait for them to be replicated before it exits. The Walreceiver then fails to apply those outstanding WAL records, as it detects the closure of the connection and exits.

Today, with two key software updates in PostgreSQL 9.3, both of these issues are addressed very well by the authors, and Streaming Replication Standbys now follow a timeline switch consistently. We can now seamlessly and painlessly switch the duties between Primary and Standby by just restarting, greatly reducing the rebuild time of the Standby.

Note: Switchover/Switchback is not possible if the WAL archives are not accessible to both servers, and in the Switchover process the Primary database must do a clean shutdown (normal or fast mode).

To demo, let's start with a setup of Streaming Replication (wiki to set up SR), which I have configured in my local VM between two clusters (5432 as Primary and 5433 as Standby) sharing a common WAL archive location, because both clusters need complete access to the sequence of WAL archives. Look at the snapshot shared below, with setup details and the current timeline ID, for a better understanding of the concept.

At this stage, everyone should have a solid understanding that Switchover and Switchback are planned activities. Now, with the SR setup in place, we can exchange the duties of primary and standby as shown below:

Switchover steps:

Step 1. Do a clean shutdown of the Primary [5432] (-m fast or smart):
[postgres@localhost:/~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data stop -mf
waiting for server to shut down.... done
server stopped
Step 2. Check the sync status and recovery status of the Standby [5433] before promoting it:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5433 -c 'select pg_last_xlog_receive_location() "receive_location",
pg_last_xlog_replay_location() "replay_location",
pg_is_in_recovery() "recovery_status";'
receive_location | replay_location | recovery_status
------------------+-----------------+-----------------
2/9F000A20 | 2/9F000A20 | t
(1 row)
The Standby is in complete sync. At this stage we are safe to promote it as the Primary.
Step 3. Open the Standby as the new Primary with pg_ctl promote or by creating a trigger file.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ grep trigger_file data_slave/recovery.conf
trigger_file = '/tmp/primary_down.txt'
[postgres@localhost:/opt/PostgreSQL/9.3~]$ touch /tmp/primary_down.txt

[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5433 -c "select pg_is_in_recovery();"
pg_is_in_recovery
-------------------
f
(1 row)

In Logs:
2014-12-29 00:16:04 PST-26344-- [host=] LOG: trigger file found: /tmp/primary_down.txt
2014-12-29 00:16:04 PST-26344-- [host=] LOG: redo done at 2/A0000028
2014-12-29 00:16:04 PST-26344-- [host=] LOG: selected new timeline ID: 14
2014-12-29 00:16:04 PST-26344-- [host=] LOG: restored log file "0000000D.history" from archive
2014-12-29 00:16:04 PST-26344-- [host=] LOG: archive recovery complete
2014-12-29 00:16:04 PST-26342-- [host=] LOG: database system is ready to accept connections
2014-12-29 00:16:04 PST-31874-- [host=] LOG: autovacuum launcher started
The Standby has been promoted as master, and a new timeline is followed, which you can notice in the logs.
Step 4. Restart the old Primary as a standby and allow it to follow the new timeline by passing "recovery_target_timeline='latest'" in the $PGDATA/recovery.conf file.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ cat data/recovery.conf
recovery_target_timeline = 'latest'
standby_mode = on
primary_conninfo = 'host=localhost port=5433 user=postgres'
restore_command = 'cp /opt/PostgreSQL/9.3/archives93/%f %p'
trigger_file = '/tmp/primary_131_down.txt'
[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data start
server starting
If you go through recovery.conf, it's very clear that the old Primary is trying to connect to port 5433 as the new Standby, pointing to the common WAL archive location, and has started.
In Logs:
2014-12-29 00:21:17 PST-32315-- [host=] LOG: database system was shut down at 2014-12-29 00:12:23 PST
2014-12-29 00:21:17 PST-32315-- [host=] LOG: restored log file "0000000E.history" from archive
2014-12-29 00:21:17 PST-32315-- [host=] LOG: entering standby mode
2014-12-29 00:21:17 PST-32315-- [host=] LOG: restored log file "0000000D00000002000000A0" from archive
2014-12-29 00:21:17 PST-32315-- [host=] LOG: restored log file "0000000D.history" from archive
2014-12-29 00:21:17 PST-32315-- [host=] LOG: consistent recovery state reached at 2/A0000090
2014-12-29 00:21:17 PST-32315-- [host=] LOG: record with zero length at 2/A0000090
2014-12-29 00:21:17 PST-32310-- [host=] LOG: database system is ready to accept read only connections
2014-12-29 00:21:17 PST-32325-- [host=] LOG: started streaming WAL from primary at 2/A0000000 on timeline 14
Step 5. Verify the new Standby status.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -c "select pg_is_in_recovery();"
pg_is_in_recovery
-------------------
t
(1 row)
Cool, without any re-setup we have brought back the old Primary as the new Standby.

Switchback steps:

Step 1. Do a clean shutdown of the new Primary [5433]:
[postgres@localhost:/opt/~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave stop -mf
waiting for server to shut down.... done
server stopped
Step 2. Check the sync status of the new Standby [5432] before promoting it.
Step 3. Open the new Standby [5432] as the Primary by creating a trigger file or with pg_ctl promote.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ touch /tmp/primary_131_down.txt
Step 4. Restart the stopped new Primary [5433] as the new Standby.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ more data_slave/recovery.conf
recovery_target_timeline = 'latest'
standby_mode = on
primary_conninfo = 'host=localhost port=5432 user=postgres'
restore_command = 'cp /opt/PostgreSQL/9.3/archives93/%f %p'
trigger_file = '/tmp/primary_down.txt'

[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave start
server starting
You can verify the logs of the new Standby.
In logs:
[postgres@localhost:/opt/PostgreSQL/9.3/data_slave/pg_log~]$ more postgresql-2014-12-29_003655.log
2014-12-29 00:36:55 PST-919-- [host=] LOG: database system was shut down at 2014-12-29 00:34:01 PST
2014-12-29 00:36:55 PST-919-- [host=] LOG: restored log file "0000000F.history" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG: entering standby mode
2014-12-29 00:36:55 PST-919-- [host=] LOG: restored log file "0000000F.history" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG: restored log file "0000000E00000002000000A1" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG: restored log file "0000000E.history" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG: consistent recovery state reached at 2/A1000090
2014-12-29 00:36:55 PST-919-- [host=] LOG: record with zero length at 2/A1000090
2014-12-29 00:36:55 PST-914-- [host=] LOG: database system is ready to accept read only connections
2014-12-29 00:36:55 PST-929-- [host=] LOG: started streaming WAL from primary at 2/A1000000 on timeline 15
2014-12-29 00:36:56 PST-919-- [host=] LOG: redo starts at 2/A1000090
Very nice, in hardly any time we have switched the duties of the Primary and Standby servers. You can even notice the timeline IDs incrementing in the logs for each promotion.

Like all my other posts, this one is part of knowledge sharing; any comments or corrections are most welcome. :)

--Raghav

Leo Hsu and Regina Obe: Foreign Data Wrappers for PostgreSQL 9.4 Windows

Planet PostgreSQL - 28 December, 2014 - 08:30

As stated in the last article, I've packaged FDW binaries for PostgreSQL 9.4 Windows 32-bit and 64-bit and added in the ogr_fdw one. We've tested these with the standard EDB Visual Studio-built PostgreSQL Windows installs and they work fine with those.


Continue reading "Foreign Data Wrappers for PostgreSQL 9.4 Windows"

Leo Hsu and Regina Obe: OGR foreign data wrapper on Windows first taste

Planet PostgreSQL - 28 December, 2014 - 00:26

This Christmas I received something very special from Paul Ramsey and Even Rouault, as detailed in Foreign Data Wrappers for PostGIS. It's something I've been patiently waiting for, for 4 years. I think it has a few issues I'm working to replicate, but overall it's much faster than I expected and pretty slick.

So why is ogr_fdw so special? Because GDAL/OGR is an avenue to many data sources, NOT JUST GEOSPATIAL. It's the NOT JUST that I am most excited about. Though the focus is geospatial, you can use it with non-geospatial data sources, as we described a long time ago in OGR2OGR for data loading.


Continue reading "OGR foreign data wrapper on Windows first taste"

Abdul Yadi: Call Memory Cache Functionality from Within Stored Procedure (Part 2)

Planet PostgreSQL - 27 December, 2014 - 09:06

Cache Invalidation
My previous post with the same title left a question: how to invalidate the cache and deal with race conditions. Colin ‘t Hart suggested implementing an “on commit trigger”, which is not available in PostgreSQL currently.

In the good book “PostgreSQL Server Programming” by Jim Mlodgenski, Hannu Krosing and Kirk Roybal, there is a statement that we can register a C-language function to be called on COMMIT or ROLLBACK events.

My idea is to have such a C function, by which I can store keys for invalidation and later call pgmemcache's memcache_delete in response to the COMMIT event:

CREATE OR REPLACE FUNCTION mcache_delete(IN t_keys text[])
RETURNS void
AS '$libdir/mcache', 'mcache_delete' LANGUAGE C VOLATILE;
psql=# begin;
psql=# ... some work ...
psql=# select mcache_delete(ARRAY['dcb50cad', '1f19bee7']); --store keys for invalidation
psql=# ... other work ...
psql=# select mcache_delete(ARRAY['91f29028']); --store other keys for invalidation
psql=# commit; --delete memcache keys

My code:

#include <postgres.h>
#include <fmgr.h>
#include <utils/array.h>
#include <utils/builtins.h>
#include <catalog/pg_type.h>
#include <access/xact.h>
#include <utils/memutils.h>

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

//declare pgmemcache functionality
extern Datum memcache_delete(PG_FUNCTION_ARGS);

void _PG_init(void);
void _PG_fini(void);
static void mcache_xact_callback(XactEvent event, void *arg);
static void mcache_reset(void);

static int mcache_count;
static text** mcache_keys;

PG_FUNCTION_INFO_V1(mcache_delete);
Datum mcache_delete(PG_FUNCTION_ARGS);

Datum mcache_delete(PG_FUNCTION_ARGS) {
  ArrayType* keys;
  Datum* elems;
  bool* nulls;
  int n, count, i;
  MemoryContext oldCtx;

  if(PG_NARGS() != 1)
    elog(ERROR, "1 argument expected");

  if( !PG_ARGISNULL(0) ){
    keys = PG_GETARG_ARRAYTYPE_P(0);
    deconstruct_array(
        keys, TEXTOID, -1, false, 'i',
        &elems, &nulls, &n);
    if(n>0){
      oldCtx = MemoryContextSwitchTo(TopTransactionContext);
      count = mcache_count + n;
      if(mcache_count == 0)
        mcache_keys = (text**)palloc(n * sizeof(text*));
      else
        mcache_keys = (text**)repalloc( (void*)mcache_keys, count * sizeof(text*));
      /* elems/nulls hold only the newly passed batch, so index them from zero */
      for(i=mcache_count; i<count ; i++)
        mcache_keys[i] = nulls[i-mcache_count] ? NULL : DatumGetTextPCopy(elems[i-mcache_count]);
      mcache_count = count;
      MemoryContextSwitchTo(oldCtx);
    }

  }

  PG_RETURN_VOID();
}

void _PG_init(void) {
  mcache_reset();
  RegisterXactCallback(mcache_xact_callback, NULL);
}

void _PG_fini(void) {
  mcache_reset();
}

static void mcache_xact_callback(XactEvent event, void *arg) {
  int i;
  if( event == XACT_EVENT_COMMIT ) {//commit
    for(i=0; i<mcache_count; i++)
      if(mcache_keys[i])
        DirectFunctionCall1(//call pgmemcache function
          memcache_delete,
          PointerGetDatum(mcache_keys[i]) );
    //note: mcache_keys are allocated in TopTransactionContext
    //and implicitly released by internal Postgresql framework
    mcache_reset();
  } else if ( event == XACT_EVENT_ABORT ) //rollback
    mcache_reset();
}

static void mcache_reset(void) {
  mcache_count=0;
  mcache_keys=NULL;
}
