[Gambas-user] any way to convert Result to Collection faster than copying?

PICCORO McKAY Lenz mckaygerhard at ...626...
Fri Jun 30 14:41:49 CEST 2017


I get more than 30 minutes, because I must run this on a low-end machine, not
on your 4-core, 16 GB RAM super-power machine. I'm talking about 1 GB of RAM
and a single-core 1.6 GHz Atom CPU.

I need to convert the Result/cursor to something else because of the ODBC
driver's lack of cursor/count support.
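
For now the only way I see is copying each row into Variant rows, something
like this (just a sketch, untested as written; hConn stands for my open ODBC
connection and the query is a placeholder):

' Drain the forward-only Result into an array of Variant[] rows,
' so afterwards it can be counted and scrolled in both directions.
Dim rData As Result = hConn.Exec("SELECT ...")
Dim aRows As New Variant[]
Dim aRow As Variant[]
Dim iField As Integer

For Each rData
  aRow = New Variant[rData.Fields.Count]
  For iField = 0 To rData.Fields.Count - 1
    aRow[iField] = rData[rData.Fields[iField].Name]
  Next
  aRows.Add(aRow)
Next

Print aRows.Count; " rows buffered"

That works, but it is exactly the per-row copy I want to avoid.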

I'm thinking about using an in-memory SQLite structure; how can I force that?
The documentation says "If Name is null, then a memory database is opened."
for SQLite.
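
I guess forcing it would look like this (a minimal untested sketch; only the
"sqlite3" Type and the null Name matter):

' Open an in-memory SQLite database (needs the gb.db and
' gb.db.sqlite3 components).
Dim hMem As New Connection

hMem.Type = "sqlite3"
hMem.Name = Null    ' per the docs, a null Name opens a memory database
hMem.Open()
hMem.Exec("CREATE TABLE buffer (id INTEGER, data TEXT)")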

So would using a memory structure be a good idea? *Tested yesterday: it took
about 10 minutes, but I don't know if I have a problem in my Gambas
installation!*
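
What I did is roughly along these lines (a sketch, not my exact code; hOdbc
and hMem are the two open connections, and the table and field names are
made up):

' Copy the ODBC result row by row into the memory table.
' The single Begin/Commit is deliberate: one transaction instead of
' a sync after every INSERT makes SQLite much faster for bulk loads.
Dim rSrc As Result = hOdbc.Exec("SELECT id, data FROM src")

hMem.Begin()
For Each rSrc
  hMem.Exec("INSERT INTO buffer (id, data) VALUES (&1, &2)", rSrc!id, rSrc!data)
Next
hMem.Commit()

Afterwards the memory table should give me a scrollable Result with a
working count.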



Lenz McKAY Gerardo (PICCORO)
http://qgqlochekone.blogspot.com

2017-06-30 4:09 GMT-04:00 adamnt42 at ...626... <adamnt42 at ...626...>:

> On Thu, 29 Jun 2017 18:57:29 -0400
> PICCORO McKAY Lenz <mckaygerhard at ...626...> wrote:
>
> > Can I convert a database Result to a Collection or a Variant matrix
> > directly, or faster than copying each row?
> >
> > I'm talking about 200,000 rows in a result... the problem is that the
> > ODBC db object supports only forward-only cursors.
> >
> > so with a matrix or a collection I can emulate the cursor behaviour
> >
> > Lenz McKAY Gerardo (PICCORO)
> > http://qgqlochekone.blogspot.com
>
> Interesting.
>
> Well, the row-by-row copy is how we do it here. I added some quick timer
> Prints to a program we run each day to verify that the database updates
> done overnight were "clean".
> The data loaded is a fairly complex join of several tables; the
> transactional table is 754,756 rows today and the master table is 733,723
> rows, and the transactional data is compared to the master data to test for
> a set of possible inconsistencies. (The actual query returns the set of
> transaction and master records that were actioned overnight - this
> generally returns about 5,000 to 10,000 rows - so I jigged it to return the
> pairs that were not actioned overnight, thereby getting row counts of the
> sizes you are talking about.) So the jigged query just returned 556,000
> rows.  Here's the timing output.
>
> 17:05:59:706    Connecting to DB
> 17:06:00:202    Loading Data    <---- so 496 mSec to establish the db
> connection
> 17:06:31:417    556502 rows     <---- so 31,215 mSec to execute the query
> and return the result
> 17:06:31:417    Unmarshalling result started
> 17:06:44:758    Unmarshalling completed 556502 rows processed  <---  so
> 13,341 mSec to unmarshall the result into an array of structs
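>
> (Those timestamps are just quick Prints along these lines - a sketch, not
> the actual program; Timer is Gambas' elapsed-seconds counter:)
>
> Dim fStart As Float = Timer
>
> Print Format(Now, "hh:nn:ss"); "    Connecting to DB"
> ' ... hConn.Open(), the query, the unmarshalling ...
> Print Format(Now, "hh:nn:ss"); "    Loading Data"
> Print "Elapsed: "; Timer - fStart; " seconds"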
>
> So, it took roughly 31 seconds to execute the query and return the result
> of half a million rows.
> To unmarshall that result into the array took just over 13 seconds. The
> unmarshalling is pretty much a straight field-by-field copy.
> (Also, I must add, I ran this on a local db copy on my old steam-driven
> laptop, 32 bits and about 1 GB of memory.)
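>
> The unmarshalling loop is essentially this (names invented; TRec here is a
> plain class with public fields, standing in for the struct):
>
> ' Field-by-field copy of each Result row into a new record object.
> Dim aRecs As New TRec[]
> Dim hRec As TRec
>
> For Each rData
>   hRec = New TRec
>   hRec.TransId = rData!trans_id
>   hRec.MasterId = rData!master_id
>   hRec.Actioned = rData!actioned
>   aRecs.Add(hRec)
> Next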
>
> That's about 42 rows per mSec of unmarshalling time (13,341 mSec for
> 556,502 rows, i.e. roughly 24 µSec per row).
> I don't think that is too bad. From my perspective it is the query that is
> eating up my life, not the unmarshalling.
>
> What sort of times do you get?
>
> b
>
> (p.s. the query has been optimised until its eyes bled.)
>
> --
> B Bruen <adamnt42 at ...3379... (sort of)>


