Dynamically adjust postgres feature queue size
Lower the default queue size, but automatically adjust it
based on how long each fetch takes. This change keeps fetching
responsive even for slow connections or databases. The current
approach with a fixed queue size can result in very slow feature
fetching, which prevents UI updates for multiple seconds.
nyalldawson committed Mar 4, 2017
1 parent 651b5c6 commit fbe4be8
Showing 2 changed files with 14 additions and 7 deletions.
19 changes: 14 additions & 5 deletions src/providers/postgres/qgspostgresfeatureiterator.cpp
@@ -22,15 +22,12 @@
 #include "qgsmessagelog.h"
 #include "qgssettings.h"
 
+#include <QElapsedTimer>
 #include <QObject>
 
-
-const int QgsPostgresFeatureIterator::FEATURE_QUEUE_SIZE = 2000;
-
-
 QgsPostgresFeatureIterator::QgsPostgresFeatureIterator( QgsPostgresFeatureSource *source, bool ownSource, const QgsFeatureRequest &request )
   : QgsAbstractFeatureIteratorFromSource<QgsPostgresFeatureSource>( source, ownSource, request )
-  , mFeatureQueueSize( FEATURE_QUEUE_SIZE )
+  , mFeatureQueueSize( 1 )
   , mFetched( 0 )
   , mFetchGeometry( false )
   , mExpressionCompiled( false )
@@ -223,6 +220,9 @@ bool QgsPostgresFeatureIterator::fetchFeature( QgsFeature &feature )
 
   if ( mFeatureQueue.empty() && !mLastFetch )
   {
+    QElapsedTimer timer;
+    timer.start();
+
     QString fetch = QStringLiteral( "FETCH FORWARD %1 FROM %2" ).arg( mFeatureQueueSize ).arg( mCursorName );
     QgsDebugMsgLevel( QString( "fetching %1 features." ).arg( mFeatureQueueSize ), 4 );
 
@@ -258,6 +258,15 @@ bool QgsPostgresFeatureIterator::fetchFeature( QgsFeature &feature )
       } // for each row in queue
     }
     unlock();
+
+    if ( timer.elapsed() > 500 && mFeatureQueueSize > 1 )
+    {
+      mFeatureQueueSize /= 2;
+    }
+    else if ( timer.elapsed() < 50 && mFeatureQueueSize < 10000 )
+    {
+      mFeatureQueueSize *= 2;
+    }
   }
 
   if ( mFeatureQueue.empty() )
2 changes: 0 additions & 2 deletions src/providers/postgres/qgspostgresfeatureiterator.h
@@ -110,8 +110,6 @@ class QgsPostgresFeatureIterator : public QgsAbstractFeatureIteratorFromSource<QgsPostgresFeatureSource>
 
     bool mIsTransactionConnection;
 
-    static const int FEATURE_QUEUE_SIZE;
-
   private:
     virtual bool providerCanSimplify( QgsSimplifyMethod::MethodType methodType ) const override;
 
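For readers skimming the hunks above: the sketch below restates the adaptive batching this commit adds as a small standalone Qt program. The loop, the simulated 20 ms fetch and the debug output are illustrative scaffolding only and not part of the QGIS code; the thresholds (500 ms / 50 ms) and the bounds (1 to 10000) are the ones from the diff.

#include <QDebug>
#include <QElapsedTimer>
#include <QThread>

int main()
{
  int featureQueueSize = 1;  // the commit starts small so the first features arrive quickly

  for ( int i = 0; i < 8; ++i )
  {
    QElapsedTimer timer;
    timer.start();

    // Stand-in for the real round trip, i.e. "FETCH FORWARD featureQueueSize FROM <cursor>"
    QThread::msleep( 20 );

    // Same adjustment as the diff: halve the batch when a fetch is slow,
    // double it when a fetch is fast, keeping the size within [1, 10000].
    if ( timer.elapsed() > 500 && featureQueueSize > 1 )
      featureQueueSize /= 2;
    else if ( timer.elapsed() < 50 && featureQueueSize < 10000 )
      featureQueueSize *= 2;

    qDebug() << "next batch size:" << featureQueueSize;
  }
  return 0;
}

With a fast (20 ms) fetch the batch size doubles on every iteration; any fetch slower than 500 ms shrinks it back toward 1.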

5 comments on commit fbe4be8

@m-kuhn (Member) commented on fbe4be8, Mar 5, 2017
Hmm... This will take around 12 round trips before a single fetch requests 2000 features, as it did before this patch.
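As a rough check of that estimate (assuming every fetch stays under the 50 ms doubling threshold, so the size doubles on each round trip; the snippet below just illustrates the arithmetic):

#include <iostream>

int main()
{
  int queueSize = 1;          // the new starting size
  int roundTrips = 0;
  long long requested = 0;

  while ( queueSize < 2000 )  // 2000 was the old fixed FETCH FORWARD size
  {
    requested += queueSize;
    ++roundTrips;
    queueSize *= 2;
  }

  // Prints: 11 fetches request 2047 features; fetch 12 is the first to ask for >= 2000 (2048)
  std::cout << roundTrips << " fetches request " << requested
            << " features; fetch " << roundTrips + 1
            << " is the first to ask for >= 2000 (" << queueSize << ")" << std::endl;
  return 0;
}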

I really think this value should be cached somewhere (connection pool?) so it doesn't need to be re-estimated for each iterator.

@nyalldawson (Collaborator, Author) commented:

I couldn't work out a reasonable place to cache this. It really needs to be in the provider, but we can't set a value in the provider from an iterator.

Re the 12 trips to get to 2000 - that's why I tested this on the best case scenario, where large queue sizes are most likely to benefit. At least on large datasets there's no regression here...

@m-kuhn (Member) commented on fbe4be8, Mar 5, 2017

What about the connection pool? I think that's quite a good place, with a QHash keyed by connInfo + sql.
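Something roughly like this, say (nothing of this exists today; the class and method names are made up for illustration):

#include <QHash>
#include <QMutex>
#include <QMutexLocker>
#include <QString>

// Hypothetical cache of the last adapted queue size, keyed by connInfo + sql,
// so a new iterator for the same connection/query does not start from 1 again.
class FeatureQueueSizeCache
{
  public:
    static int startingSize( const QString &connInfo, const QString &sql )
    {
      QMutexLocker locker( &sMutex );
      return sSizes.value( connInfo + sql, 1 );  // default matches the commit's initial size
    }

    static void remember( const QString &connInfo, const QString &sql, int size )
    {
      QMutexLocker locker( &sMutex );
      sSizes.insert( connInfo + sql, size );
    }

  private:
    static QHash<QString, int> sSizes;
    static QMutex sMutex;
};

QHash<QString, int> FeatureQueueSizeCache::sSizes;
QMutex FeatureQueueSizeCache::sMutex;

The iterator would read startingSize() when it opens its cursor and call remember() with its final adapted size when it closes.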

I don't think a benchmark on a local database should be treated as representative (latency < 1ms) ;)

I'm also wondering whether this will negatively impact situations where the throughput (volume) is good but the latency is bad: if fetching even a single feature takes more than 50 ms, the queue size never doubles, so those connections will stick at 1 feature / request although they would benefit a lot from bigger queue sizes.

I think it would be great to be able to show the queue size next to the measured time in the message log (like the rendering times can be shown), to get a better overview of real-world examples.

@nyalldawson (Collaborator, Author) commented:

Ah sorry, I should stress that the benchmark was done as a way to test the best case, the one which would benefit most from a large queue. I'll be testing on a super slow server over the coming week to verify there are no regressions there either. At least from a QGIS user's POV, the change is day and night for the slow database.

@m-kuhn (Member) commented on fbe4be8, Mar 5, 2017

If it helps in your scenario, that's a really good start and worth it! I guess for 8% of users it's better and for 88% it's the same. What I wonder about is the other 4%.

What I'm trying to say is that there are different reasons for a connection being perceived as slow (latency vs bandwidth).
I think we should get some good benchmark results to tune min time / max time / default queue size - and decide if this is always good or if a queue size override needs to be available.
