
Paging Results

December 26, 2007


1. Restrict the query output at the database end only

2. Read the entire result set and, while iterating over it, cache each row as a serializable object in some medium, then apply the paging principle to the cache. (The cache here could be a file that stores serialized persistent objects, or an embedded database such as HSQLDB or JavaDB.)

An in-memory database (such as TimesTen) is very, very fast and can act as a queryable cache for a back end database.

3. CachedRowSet, CachedRowSetImpl

CachedRowSet is a rowset implementation in which the rows are cached and the rowset does not hold a live connection to the database (it is disconnected). The difference from an ordinary ResultSet is that the cursor of a CachedRowSet can be moved forward, backward, or to a particular row, whereas the cursor of a ResultSet may move only forward. In addition, a CachedRowSet is updatable even when the ResultSet it was populated from is not, because by default a CachedRowSet object is both scrollable and updatable.

A CachedRowSet is thus a disconnected set of rows cached outside the data source. Being lightweight and serializable, it can easily be sent across a wire, for example to deliver data to a thin client. However, a CachedRowSet object does have a limitation: it can hold only as much data as fits in memory at one time.

Examples for Option 1:


-- MySQL
select * from tableName where <condition> LIMIT ?, ?

-- Oracle (ROWNUM is assigned as rows are fetched, so a lower bound needs a subquery)
select * from ( select t.*, rownum rn from tableName t where <condition> ) where rn between ? and ?

-- SQL Server 2005+
SELECT * FROM ( SELECT *, ROW_NUMBER() OVER (ORDER BY name) AS row FROM sys.databases ) a WHERE row > ? AND row <= ?
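Whichever dialect you use, the two bind values follow from the page number and page size. A small sketch (the class and method names are illustrative, not part of any API) computes them for the LIMIT and ROWNUM forms above:

```java
public class PageBounds {
    // Bind values for the MySQL form "LIMIT ?, ?": the first value is the
    // zero-based offset of the first row, the second the number of rows.
    public static int[] limitParams(int page, int pageSize) {
        if (page < 1 || pageSize < 1) {
            throw new IllegalArgumentException("page and pageSize must be >= 1");
        }
        return new int[] { (page - 1) * pageSize, pageSize };
    }

    // Bind values for the Oracle form "rn BETWEEN ? AND ?": both bounds
    // are one-based and inclusive.
    public static int[] rownumParams(int page, int pageSize) {
        return new int[] { (page - 1) * pageSize + 1, page * pageSize };
    }

    public static void main(String[] args) {
        int[] limit = limitParams(3, 10);   // page 3, 10 rows per page
        System.out.println(limit[0] + "," + limit[1]);   // 20,10
        int[] rn = rownumParams(3, 10);
        System.out.println(rn[0] + "," + rn[1]);         // 21,30
    }
}
```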


setFetchSize(int) defines the number of rows that will be read from the database when the ResultSet needs more rows.

The method in the java.sql.Statement interface sets the "default" value for all ResultSets derived from that Statement;

the method in the java.sql.ResultSet interface will override that value for a specific ResultSet. Since database fetches can be expensive in a networked environment, fetch size has an impact on performance.

setMaxRows(int) sets the limit on the maximum number of rows in a ResultSet object. If this limit is exceeded, the excess rows are "silently dropped". That's all the API says, so the setMaxRows method may not help performance at all, other than decreasing memory usage. A value of 0 (the default) means no limit.
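The "silently dropped" behaviour can be mimicked on a plain list to make the semantics concrete (this is a sketch of the contract, not the driver's actual implementation):

```java
import java.util.List;

public class MaxRowsDemo {
    // Mirrors the Statement.setMaxRows contract on an in-memory list:
    // rows beyond the limit are silently dropped; 0 means no limit.
    public static <T> List<T> applyMaxRows(List<T> rows, int maxRows) {
        if (maxRows == 0 || rows.size() <= maxRows) {
            return rows;
        }
        return rows.subList(0, maxRows);
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(1, 2, 3, 4, 5);
        System.out.println(applyMaxRows(rows, 3));  // [1, 2, 3]
        System.out.println(applyMaxRows(rows, 0));  // [1, 2, 3, 4, 5]
    }
}
```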

Scrollable ResultSet: a scrollable ResultSet, however, consumes a lot of resources and is slower than a non-scrollable one, so the saving in memory is lost in the turnaround time.

Sometimes an executed query returns too many results. For instance, maybe we're displaying a list of customers on a web page. The web page can display only so many customers, and there may be thousands or even millions of customers in the database. The Query API has two built-in methods to solve this type of scenario: setMaxResults() and setFirstResult():

public List getCustomers(int max, int index) {
    Query query = entityManager.createQuery("from Customer c");
    return query.setMaxResults(max)
                .setFirstResult(index)
                .getResultList();
}
The getCustomers() method executes a query that obtains all customers from the database. We limit the number of customers it returns by using the setMaxResults() method, passing in the max method parameter. The method is also designed so that you can define an arbitrary set of results that you want returned by the execution of the query. The setFirstResult() method tells the query what position in the executed query's result set you want returned. So, if you had a max result of 3 and a first result of 5, customers 5, 6, and 7 would be returned. We set this value in the getCustomers() method with the index parameter. Let's take this method and write a code fragment that lists all customers in the database:

List results;
int first = 0;
int max = 10;

do {
    results = getCustomers(max, first);
    Iterator it = results.iterator();
    while (it.hasNext()) {
        Customer c = (Customer) it.next();
        System.out.println(c.getFirstName() + " " + c.getLastName());
    }
    entityManager.clear();
    first = first + results.size();
} while (results.size() > 0);

In this example, we loop through all customers in the database and output their first and last names to the system output stream. If we had thousands or even millions of customers in the database, we could quickly run out of memory, as each execution of the getCustomers() method would return customers that were still managed by the entity manager. So, after we are finished outputting a block of customers, we call EntityManager.clear() to detach these customers and let them be garbage-collected by the Java VM. Use this pattern when you need to deal with a lot of entity objects within the same transaction.
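The same block-at-a-time loop can be exercised against a plain in-memory list standing in for the entity manager (getPage here is a hypothetical stand-in for getCustomers, not part of any API):

```java
import java.util.ArrayList;
import java.util.List;

public class PagingLoop {
    // Stand-in for getCustomers(max, first): returns at most max elements
    // of the source list, starting at zero-based index first.
    public static List<String> getPage(List<String> source, int max, int first) {
        int to = Math.min(first + max, source.size());
        if (first >= to) {
            return new ArrayList<>();
        }
        return new ArrayList<>(source.subList(first, to));
    }

    public static void main(String[] args) {
        List<String> customers = new ArrayList<>();
        for (int i = 1; i <= 25; i++) {
            customers.add("customer-" + i);
        }

        int first = 0;
        int max = 10;
        int pages = 0;
        List<String> results;
        do {
            results = getPage(customers, max, first);
            for (String c : results) {
                System.out.println(c);
            }
            first += results.size();
            if (!results.isEmpty()) {
                pages++;
            }
        } while (!results.isEmpty());

        // 25 rows at 10 per page makes 3 pages (10 + 10 + 5).
        System.out.println("pages=" + pages + " rows=" + first);  // pages=3 rows=25
    }
}
```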


For example, let's say there are 1000 employee records in the EMPLOYEES table, which has the columns empid, first_name and last_name. To retrieve the first 200 rows in Java, you can say:


Connection dbConn = db.getConnection();
Statement stmt = dbConn.createStatement();
stmt.setMaxRows(200);
String sql = "select empid, first_name, last_name from employees";
ResultSet rs = stmt.executeQuery(sql);

This gets you a ResultSet with 200 rows, and everybody is happy. Alternatively, you could have used rownum in the SQL query itself, without calling setMaxRows() on the statement, like below:


Connection dbConn = db.getConnection();
Statement stmt = dbConn.createStatement();
String sql = "select empid, first_name, last_name from employees where rownum < 201";
ResultSet rs = stmt.executeQuery(sql);

This time too, the ResultSet has 200 rows and everybody is happy. Now the question is which one you should use, and why. I may not be an expert on this, but here are my observations…

If you are dealing with a table that has relatively few records, say in the thousands, then either way works just fine. But if you are dealing with a potentially huge database with millions of records, then I guess using rownum in the SQL makes more sense, because the impact on your Java heap space will be smaller.

According to the JDBC documentation, setMaxRows behaves as follows:
The maxRows limit is set to limit the number of rows that any ResultSet can contain. If the limit is exceeded, the excess rows are silently dropped.

For example, let's say there are 14 million records in a table, from which you want to retrieve the first 3000 rows. If you use statement.setMaxRows(), the JDBC call retrieves all 14 million records from the database table and sets only the first 3000 on the ResultSet object. Retrieving such a huge number of records puts a huge lump of data on your JVM, and you are dependent on the garbage collector to reclaim that space. But if you use rownum in the SQL, the number of rows retrieved from the DB by JDBC itself is limited, and there won't be any heap-space issues.

I ran a trial session with our DBA, and it was confirmed that statement.setMaxRows() does not modify the SQL to use rownum automatically; when we traced the session, it was retrieving all the millions of records. I came to the conclusion that the heap-space issues I was facing were due to the use of statement.setMaxRows() instead of rownum in the SQL. The speed of retrieval may also be better when you limit by rownum, though I don't have substantial evidence to state that as a fact.
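Since, in the rownum approach, the limit travels inside the SQL text, only the limited rows ever leave the database. A trivial sketch (class and method names are illustrative) of building such a query:

```java
public class RownumSql {
    // Appends an Oracle-style ROWNUM limit to a base query that has no
    // WHERE clause yet (illustrative; real code should use bind variables).
    public static String limitWithRownum(String baseSql, int maxRows) {
        return baseSql + " where rownum <= " + maxRows;
    }

    public static void main(String[] args) {
        System.out.println(limitWithRownum(
            "select empid, first_name, last_name from employees", 3000));
    }
}
```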
