PATENT PORTFOLIO

HyperBatch Patent Portfolio

KeyGen Data’s HyperBatch method is supported by the combination of several patents within its portfolio. They include the following:

  • Batch Processing on Structured Data (BPSD)
  • Detached Field Batch Process (DFBP)
  • Horizontal Processing of Sequential Data (HPSD)

Batch Processing on Structured Data (BPSD)

Patent # 11,386,083

Problem:

Currently, only one method is available to process table data in batch/sequential mode: cursor processing. The cursor processing method fetches data row by row, sorted by key, from one or more tables, applies the business rules, updates the current row, and then fetches the next row in the sequence. Cursor processing is extremely slow: tests indicate a processing rate of only 10,000 to 100,000 rows per minute. At that rate, processing tens of billions of rows within a reasonable timeframe is impossible. Accordingly, there is a need in the art for other options to process large datasets in tables. The present invention fulfills this need.
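The row-by-row pattern described above can be sketched as follows. This is a minimal illustration using Python's standard `sqlite3` module; the table, column names, and business rule are hypothetical, not taken from the patent.

```python
import sqlite3

# Build a small in-memory table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany(
    "INSERT INTO accounts (id, balance) VALUES (?, ?)",
    [(i, 100.0 * i) for i in range(1, 6)],
)

# Cursor processing: fetch rows one at a time, sorted by key, apply a
# business rule, and update the current row before moving to the next.
rows = conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall()
for row_id, balance in rows:
    new_balance = balance * 1.05  # example business rule: apply 5% interest
    conn.execute(
        "UPDATE accounts SET balance = ? WHERE id = ?", (new_balance, row_id)
    )
conn.commit()
```

Each iteration performs a separate fetch-and-update round trip, which is why throughput stays so low on large tables.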

Detached Field Batch Process (DFBP)

Patent # 11,599,529

Problem:

All programs currently available for processing batch transactions employ a method that was developed many decades ago. A major deficiency of this method is that the entire record set, with all of its data, is moved during the batch process. In principle, this can contribute significantly to a decrease in performance, because the entire record must be kept with all of its data throughout the process, increasing the I/O needed for both moving and sorting the records. With large datasets, it is very common for a significant portion of each record to consist of irrelevant data, namely, data that is not needed within the business logic of the batch process.
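The contrast can be sketched in a few lines. The record layout, field names, and business rule below are hypothetical; the point is that the legacy approach sorts and moves whole records, while a detached-field approach carries only the key and the fields the rules actually touch, rejoining the results afterward.

```python
# Illustrative records: a small key, one field the business logic needs,
# and a large payload that the logic never reads.
records = [
    {"key": 3, "balance": 30.0, "audit_blob": "x" * 1000},
    {"key": 1, "balance": 10.0, "audit_blob": "y" * 1000},
    {"key": 2, "balance": 20.0, "audit_blob": "z" * 1000},
]

# Legacy approach: the entire record, irrelevant payload included,
# is moved through the sort and the batch process.
full_sorted = sorted(records, key=lambda r: r["key"])

# Detached-field idea: sort and process only (key, relevant field) pairs,
# then write the results back to the untouched records at the end.
detached = sorted((r["key"], r["balance"]) for r in records)
results = {key: balance * 1.05 for key, balance in detached}  # business rule
for r in records:
    r["balance"] = results[r["key"]]
```

In the detached version, the kilobyte-scale `audit_blob` payload never moves through the sort or the processing loop, which is the I/O saving the patent's problem statement points at.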

Horizontal Processing of Sequential Data (HPSD)

Patent # 11,599,530

Problem:

Sequential data is defined as information that is stored in sequential order, where the beginning of each subsequent row of data follows the end of the preceding row. The current method of processing sequential data reads each row sequentially, or vertically, from the top to the bottom of the file, processing only one row of data at a time. As a result, processing all the rows of a large dataset can take several hours, even on the most powerful computers.
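The "vertical" pattern is the familiar one-row-per-iteration loop. A minimal sketch, with illustrative file contents:

```python
import io

# Stand-in for a sequential input file; each row ends where the next begins.
data = io.StringIO("1,alice\n2,bob\n3,carol\n")

# Vertical processing: read top to bottom, one row per iteration.
processed = 0
for line in data:
    row_id, name = line.rstrip("\n").split(",")
    processed += 1  # stand-in for the per-row business logic
```

Because each row waits for the previous one to finish, total runtime grows linearly with row count no matter how fast the hardware is.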

In addition, for these datasets to be processed sequentially, all of the input files must be sorted prior to the sequential process, because the process requires a given set of unique keys based on the business rules that define its logic. The sort step is frequently much longer than the core business logic in the sequential process. The sort also consumes substantial resources, such as compute, disk space, I/O operations, and memory, which can add materially to the overall cost.
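The pre-sort requirement can be sketched with a classic match/merge: every input must be ordered by the same unique key before the sequential pass can consume them. The data and keys below are illustrative.

```python
import heapq

# Two unsorted input files, represented as (key, payload) records.
masters = [(3, "M3"), (1, "M1"), (2, "M2")]
updates = [(2, "U2"), (1, "U1")]

# The mandatory sort step, which is frequently costlier than the
# business logic that follows it.
masters.sort(key=lambda rec: rec[0])
updates.sort(key=lambda rec: rec[0])

# The core sequential process then consumes both streams in key order.
merged = list(heapq.merge(masters, updates, key=lambda rec: rec[0]))
```

For files too large to sort in memory, the same sort becomes an external (disk-based) sort, which is where the compute, disk-space, I/O, and memory costs mentioned above accumulate.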