We were implementing a bulk-data transfer from SAP into an Oracle database. Easily upwards of 15,000 materials were being uploaded to the database through a stored procedure (a company policy). Normally we like this approach because it decouples PI from the database's underlying table structure, but performance was terrible.
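For context, the original design looked roughly like the sketch below: PI's JDBC receiver invoked a stored procedure once per material. All names here (MATERIALS, UPSERT_MATERIAL, the columns) are invented for illustration, not the actual schema.

```sql
-- Illustrative sketch of the original approach: PI calls a procedure
-- like this once per material record. Names are hypothetical.
CREATE OR REPLACE PROCEDURE upsert_material (
  p_material_no  IN VARCHAR2,
  p_description  IN VARCHAR2,
  p_base_uom     IN VARCHAR2
) AS
BEGIN
  -- Try an update first; insert if the material does not exist yet.
  UPDATE materials
     SET description = p_description,
         base_uom    = p_base_uom
   WHERE material_no = p_material_no;

  IF SQL%ROWCOUNT = 0 THEN
    INSERT INTO materials (material_no, description, base_uom)
    VALUES (p_material_no, p_description, p_base_uom);
  END IF;
END upsert_material;
/
```

At 15,000+ materials, that is 15,000+ individual procedure calls, each with its own round trip, which is where the time went.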
In testing, the entire workflow took almost 2 hours. Whilst this in itself wasn't an issue (the process runs in the middle of the night), it placed unnecessary load on both systems, and the extended duration put the process at increased risk of failure (e.g. due to network issues).
Keen to improve this, we looked at PI's "batch insert" capabilities. To maintain the decoupling and protect the destination tables, we created an interface table to temporarily hold the material data, plus a procedure that safely merged it into the destination table (both sketched below).
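A minimal sketch of that setup, again with invented names: PI batch-inserts plain rows into the interface (staging) table, then triggers a single procedure that merges them into the destination table and clears the staging area.

```sql
-- Illustrative sketch: PI batch-inserts into MATERIALS_STAGE, then
-- calls MERGE_MATERIALS once per run. Names are hypothetical.
CREATE TABLE materials_stage (
  material_no  VARCHAR2(18),
  description  VARCHAR2(40),
  base_uom     VARCHAR2(3)
);

CREATE OR REPLACE PROCEDURE merge_materials AS
BEGIN
  -- One set-based MERGE replaces thousands of per-row procedure calls.
  MERGE INTO materials d
  USING materials_stage s
     ON (d.material_no = s.material_no)
  WHEN MATCHED THEN
    UPDATE SET d.description = s.description,
               d.base_uom    = s.base_uom
  WHEN NOT MATCHED THEN
    INSERT (material_no, description, base_uom)
    VALUES (s.material_no, s.description, s.base_uom);

  -- Clear the staging area ready for the next run.
  DELETE FROM materials_stage;
  COMMIT;
END merge_materials;
/
```

The staging table keeps PI decoupled from the real schema exactly as the original procedure did, while the set-based MERGE does all the destination-table work in a single statement inside the database.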
Testing showed a 30- to 60-fold performance improvement in the PI-to-database exchange, and the entire process ended up taking just 10 minutes.
Author: Edwin