When a Pega application has been running in production for a long time, it may have accumulated a large case volume. If you then expose a new column on the case table and deploy to Pega, the column population job triggered by the deployment can take a long time to run, and it can also cause performance issues. Is there another way to solve this?
Most of the time, a column is exposed for reporting purposes, i.e. for queries across multiple records. If you open a single instance using Obj-Open, Pega fetches the data from the BLOB column (pzPVStream) anyway, provided it exists in the table. Also, when you update a record in Pega, any newly exposed columns are populated at save time.
The column population job exists to handle cases that were created before the column was exposed, where that data is needed for reporting. Pega provides a database function that reads values directly from the stream (BLOB). You can use this function to populate the columns with a SQL query instead of letting the Pega column population job do the work. This is much faster because it does not involve the Pega engine at all. The query syntax looks something like this:
UPDATE <table name>
SET <column name> = pegadata.pr_read_from_stream('<column name>',pzInsKey,pzPVStream)
WHERE <condition>
You can add a condition based on the create date time or other fields if you want to run the query in small batches. The function extracts the named property's value from the BLOB column and returns it.
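As a sketch, assuming a PostgreSQL database with the standard pegadata schema and the usual pxCreateDateTime column, a batched run might look like the following. The table name (pc_myco_work_case) and property name (CustomerID) are hypothetical placeholders for your own case table and exposed property:

```sql
-- Hypothetical batched run: populate the new column only for rows
-- where it is still empty, one month of cases at a time.
UPDATE pegadata.pc_myco_work_case
SET    customerid = pegadata.pr_read_from_stream('CustomerID', pzInsKey, pzPVStream)
WHERE  customerid IS NULL
  AND  pxCreateDateTime >= TIMESTAMP '2023-01-01'
  AND  pxCreateDateTime <  TIMESTAMP '2023-02-01';
```

Running the update in date-bounded batches like this keeps each transaction small, which limits lock time and undo/redo growth on a large case table.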
If you populate the columns by query, you can disable the column population job to avoid the performance impact it would otherwise cause.
Tips: The same function can also be used to report on unexposed columns by running ad hoc queries for one-time requests. If the query needs to be executed frequently, exposing the column is the better idea. Also, any column referenced in a report definition must be exposed.
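For example, a one-off report on an unexposed property might be sketched like this, again assuming PostgreSQL and the pegadata schema; the table name, property name (OrderTotal), and class filter are hypothetical:

```sql
-- Hypothetical one-off report: read an unexposed property straight
-- from the BLOB without exposing it as a column first.
SELECT pzInsKey,
       pegadata.pr_read_from_stream('OrderTotal', pzInsKey, pzPVStream) AS ordertotal
FROM   pegadata.pc_myco_work_case
WHERE  pxObjClass = 'MyCo-Work-Case';
```

Because every row's BLOB must be decoded at query time, this is fine for occasional use but noticeably slower than querying a real exposed column, which is why frequent reports should still expose the property.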