Implementing a BQRB report
Learn how to implement a custom BQRB report.
Introduction
So far you have seen how to use the Minka-hosted BQRB to generate preexisting reports. In this tutorial you will install your own BQRB and write your own reports.
Prerequisites
We will use the BigQuery Reporting Bridge (BQRB) to generate a report. In order to do this, you will need to get the BQRB Docker image from Minka.
To expose the bridge to Ledger, we will use localtunnel.
You will also need a GCP project with BigQuery working as well as the correct permissions set up.
Finally, you will need to download the example dataset and import it into BigQuery. In a real-world scenario, you would set up a data sync from your ledger into your reporting project.
This tutorial is part of the Reporting Learning Path, so you should be familiar with the rest of the content there before proceeding. The following steps assume that you already have a Ledger set up according to How to set up an RTP Ledger.
Running BQRB
First we will need to create a signer for the bridge:
$ minka signer create
? Handle: bqrb-signer
? Key pair source: Generate new key pair
? Add custom data? No
? Signer password: [hidden]
? Repeat password: [hidden]
Signer bqrb-signer saved locally.
? Store to ledger? Public key only
? Schema: service
? Signer: owner
? Signer password for owner [hidden]
Signer summary:
---------------------------------------------------------------------------
Handle: bqrb-signer
Public: 8BxNFIPAQBRd4kEQkXm3KjlRHMzb1i14zdsIhWOP9MA=
Access rules:
#0
- Action: any
- Signer:
- public: aeblUjUYsPlYsWzWfajZIExO2Dz5NVbX2RRDpiiTwrc=
Status: created
⚠️ WARNING:: Secret or private key is critical data that should be handled
with care. Private keys are used to modify balances and it is important to
understand that anyone who has access to that key can perform sensitive
ledger operations.
✅ Signer created successfully:
✅ Signer stored to ledger
Signer summary:
---------------------------------------------------------------------------
Handle: bqrb-signer
Public: 8BxNFIPAQBRd4kEQkXm3KjlRHMzb1i14zdsIhWOP9MA=
Access rules:
#0
- Action: any
- Signer:
- public: 8BxNFIPAQBRd4kEQkXm3KjlRHMzb1i14zdsIhWOP9MA=
⚠️ WARNING:: Secret or private key is critical data that should be handled
with care. Private keys are used to modify balances and it is important to
understand that anyone who has access to that key can perform sensitive
ledger operations.
You can get the public and private keys by running:
$ minka signer show -s bqrb-signer
? Signer password for bqrb-signer [hidden]
Signer summary:
---------------------------------------------------------------------------
Handle: bqrb-signer
Public: 8BxNFIPAQBRd4kEQkXm3KjlRHMzb1i14zdsIhWOP9MA=
Secret: Xee1JFWzPxpaeNfD/EnhJKGxP8mnjCNijLvCVhQmb4A=
⚠️ WARNING:: Secret or private key is critical data that should be handled
with care. Private keys are used to modify balances and it is important to
understand that anyone who has access to that key can perform sensitive
ledger operations.
Create a file named .env and populate it with the signer information above, as well as your ledger and GCP project information:
BRIDGE_HANDLE=bqrb-signer
BRIDGE_PUBLIC_KEY=8BxNFIPAQBRd4kEQkXm3KjlRHMzb1i14zdsIhWOP9MA=
BRIDGE_SECRET_KEY=Xee1JFWzPxpaeNfD/EnhJKGxP8mnjCNijLvCVhQmb4A=
BRIDGE_KEY_FORMAT=ed25519-raw
LEDGER_SERVER=https://bqrb-example.ldg-stg.one/api/v2
REPORTING_LOCATION=US
REPORTING_PROJECT=minka-ledger-stg
REPORTING_BUCKET=ledger-reports-stg
REPORTING_DATASET=bqrb_example
REPORTING_PROCEDURE=generate_report
PORT=3002
LOGGER_LEVEL=log
MOCK_BIGQUERY_CLIENT=false
To get the image, run:
$ docker pull \
--platform linux/amd64 \
gcr.io/minka-ledger-stg/bridge-bqrb_app:v2.28.8-stg
You can run the image by executing:
$ docker run \
--platform linux/amd64 \
-p 3002:3002 \
--env-file .env \
-v ~/.config/gcloud/application_default_credentials.json:/root/.config/gcloud/application_default_credentials.json \
gcr.io/minka-ledger-stg/bridge-bqrb_app:v2.28.8-stg
We are passing our local gcloud credentials to the container here; feel free to use another means of authenticating with gcloud.
At this point you will want to expose the bridge to Ledger; we will use localtunnel to do that. You can install it by running:
$ npm install -g localtunnel
To run localtunnel, execute:
$ lt --port 3002
your url is: https://stupid-taxis-lead.loca.lt
You will need to update your bridge record whenever this URL changes. You can do that by running minka bridge update own-bqrb.
Importing test data
If you don't already have your test data imported, you will need the file located here: report.csv. You can then run the following command:
$ bq load --source_format=CSV --autodetect "minka-ledger-stg:bqrb_example.transactions" report.csv
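You can verify the import by running a quick query against the table; the exact count depends on the example dataset:
$ bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM bqrb_example.transactions'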
Ledger setup
To set up your ledger for the BQRB bridge, follow How to set up BigQuery Reporting Bridge.
We will now also set up a schema for our report:
$ minka schema create
? Handle: test-report
? Record: report
? Add custom data? No
? What would you like to use to enter the schema: My default editor
? Enter schema content: Received
? Signer: owner
? Signer password for owner [hidden]
✅ Schema created successfully:
Schema summary:
---------------------------------------------------------------------------
Handle: test-report
Record: report
Format: json-schema
Schema content:
---------------------------------------------------------------------------
{
"description": "Create Test report",
"properties": {
"custom": {
"additionalProperties": false,
"properties": {
"account": {
"title": "Account",
"type": "string"
}
},
"required": [
"account"
],
"title": "",
"type": "object"
}
},
"required": [
"custom"
],
"title": "Test report",
"type": "object"
}
Luid: $sch.-147wbaXgwsP2mH_X
If we create a report now, we will get an error because we have not yet created the BQ stored procedure:
$ minka report create
? Handle: d0EdK1l8t2BImFfeqAhoy
? Schema: test-report
? Add custom data? Yes
? Field class: string
? Field title: account
? Field value: 1001001001
? What do you want to do next? Finish adding fields
? Signer: owner
? Signer password for owner [hidden]
✅ Report created successfully:
Report summary:
---------------------------------------------------------------------------
Handle: d0EdK1l8t2BImFfeqAhoy
Schema: test-report
Custom:
- account: 1001001001
Status: created
Access rules:
#0
- Action: any
- Signer:
- public: aeblUjUYsPlYsWzWfajZIExO2Dz5NVbX2RRDpiiTwrc=
Handle: owner
Public: aeblUjUYsPlYsWzWfajZIExO2Dz5NVbX2RRDpiiTwrc=
In the docker container logs we can see:
...
[Nest] 1 - 2025-10-08T17:39:14.020Z ERROR [ReportingService] Error while generating report: Procedure is not found: bqrb_example.generate_report at [1:6] +409ms
...
We will now continue with creating the stored procedure.
Report generation flow
This is the high level flow of report generation using BQRB:
- Ledger sends the report record to BQRB
- BQRB performs basic validation
- BQRB calls the BQ stored procedure with the ledger handle and report record
- The BQ stored procedure generates the report
- The BQ stored procedure saves the report to the correct GCS location
- The BQ stored procedure returns the proof custom data
- BQRB signs the proof and sends it to Ledger
BQRB - BQ interface
In its most basic form, the BQRB - BQ interface is pretty simple. After receiving the report-created event from Ledger and performing basic validations, like checking that Ledger signed the record, BQRB sends the ledger handle and report record to a BQ stored procedure.
The BQ stored procedure returns the proof custom data that BQRB will then sign and send back to Ledger.
Ledger will retry getting the result from the created job in case of a timeout, so in edge cases BQ may receive multiple calls to generate the same report.
In case of errors in BQ or elsewhere, BQRB may also add a rejected proof itself.
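Concretely, the dataset and procedure name are taken from REPORTING_DATASET and REPORTING_PROCEDURE in the .env file, so in this tutorial BQRB expects a procedure of the following shape. This is just a minimal sketch of the interface; we will build the full procedure later in this tutorial:
-- Minimal interface sketch: BQRB calls the procedure with the ledger handle
-- and the report record, and reads the proof custom data that the
-- procedure SELECTs.
CREATE OR REPLACE PROCEDURE bqrb_example.generate_report(ledger STRING, report JSON)
BEGIN
  -- Validation, report generation and GCS export go here.
  SELECT PARSE_JSON('{"status": "completed", "assets": []}') AS proof_custom;
END;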
Generating a report
Validating input
When the BQ stored procedure gets triggered, it receives the ledger handle and the report record. It is then responsible for first validating the report record. This mostly means validating the arguments of the report: their existence, type, range, allowed values and so on. In case one of the arguments is not valid, the stored procedure should respond with a rejected proof.
If you want to handle the creation of multiple different reports, you will need to add logic to this procedure that dispatches between them, potentially calling other stored procedures. Of course, you are free to save metadata about the execution and other data into your own tables as you please.
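For example, a minimal validation sketch for the account argument used in this tutorial could look like this near the top of the procedure, assuming proof_custom is declared as in the full example below (the detail message is illustrative):
-- Reject the report if the required account argument is missing.
IF JSON_VALUE(report.data.custom.account) IS NULL THEN
  SET proof_custom = '{"status": "rejected", "reason": "bridge.unexpected-error", "detail": "Missing account argument"}';
  SELECT PARSE_JSON(proof_custom) AS proof_custom;
  RETURN;
END IF;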
Generating the report
After the parameters of the report are validated, the procedure should generate the report. For this step you will usually want to use data exported from Ledger and imported into BQ. In this case, we will use mock data that we imported earlier in the tutorial.
Since BQRB is multi-ledger, you should always filter your data by ledger. In addition, you should always filter by the meta.domain of the report record, because a report created in a certain domain only has access to data from that domain. The only exception is common records like symbols.
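A hypothetical sketch of such filtering is shown below. It assumes that your synced transactions table has ledger and domain columns, which the mock dataset in this tutorial does not have. The ledger argument is copied into a variable with a distinct name because columns shadow variables of the same name in BigQuery queries:
-- Hypothetical filtering sketch; declare the variable at the top of the
-- procedure body.
DECLARE ledger_handle STRING DEFAULT ledger;

CREATE TEMP TABLE result AS
SELECT t.*
FROM bqrb_example.transactions AS t
WHERE t.ledger = ledger_handle
  AND t.domain = IFNULL(JSON_VALUE(report.meta.domain), '');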
While generating a report, you can create multiple assets and attach them to the same report. Keep in mind that the first CSV or JSON asset attached to the report will be displayed in the Studio preview screen. You can use it to show users a part of the data, aggregated data, or just metadata from generating the report.
Saving report to GCS
At this point you should save the report to the correct GCS location for which you will need the proper permissions.
When exporting files, keep the Limitations below in mind.
The correct location is determined from the report record and is implemented in this function:
CREATE OR REPLACE FUNCTION bqrb_example.generate_report_prefix(ledger STRING, report JSON)
RETURNS STRING
LANGUAGE js AS """
const domain = report.meta.domain
const schema = report.data.schema
const luid = report.luid
return (
`ledgers/${ledger}` +
(domain && domain.length > 0 ? `/domains/${domain}` : '') +
(schema && schema.length > 0 ? `/schemas/${schema}` : '') +
`/reports/${luid}` +
`/assets`
)
""";
BigQuery requires that the filename contain a * symbol so that it can append numbers to the filename in case more than one file needs to be generated. You can use the following function to help with that:
CREATE OR REPLACE FUNCTION bqrb_example.convert_to_export_filename(fullFilename STRING)
RETURNS STRING
LANGUAGE js AS """
const STAR_STRING = '_*'
const EXTENSION_DELIMITER = '.'
const PATH_DELIMITER = '/'
const pathEnd = fullFilename.lastIndexOf(PATH_DELIMITER)
const path = fullFilename.substring(0, pathEnd)
const filename = fullFilename.substring(pathEnd + 1)
const extensionLocation = filename.lastIndexOf(EXTENSION_DELIMITER)
const name = extensionLocation > 0 ? filename.substring(0, extensionLocation) : filename
const extension = extensionLocation > 0 ? filename.substring(extensionLocation + 1) : ''
return path + PATH_DELIMITER + name + STAR_STRING + EXTENSION_DELIMITER + extension
""";
The easiest way to save the result to GCS is to use the EXPORT DATA statement. Besides the uri, you will want to specify the format and set overwrite=true.
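A minimal sketch, using the export filename produced by the helper above (the full example below adds a LIMIT clause, see Limitations):
EXPORT DATA
OPTIONS(
  uri=(export_filename),
  format='CSV',
  overwrite=true)
AS
SELECT * FROM result;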
Saving assets into Ledger
When report generation is finished, you should respond to Ledger with a JSON that corresponds to the proof custom data that will be sent to Ledger. The handle of the file should be the same as the filename in GCS. This means that you will be creating a JSON with status completed and the assets array.
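For example, the proof custom data for a successfully generated report could look like this; the output path here is illustrative:
{
  "status": "completed",
  "assets": [
    {
      "handle": "report.csv",
      "output": "gs://ledger-reports-stg/ledgers/bqrb-example/schemas/test-report/reports/$rpt.123/assets/report.csv"
    }
  ]
}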
Handling errors
There are two types of errors: those that you want BQRB to save into the report, which should be visible to the users of Ledger, and those that should not be visible.
The errors that should be visible to users should be handled by responding with a proof with status rejected and details added.
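For example, the unknown-account case in the example below responds with:
{
  "status": "rejected",
  "reason": "bridge.unexpected-error",
  "detail": "Unknown account"
}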
Errors that we do not want to show to Ledger users should use the BQ RAISE statement, e.g.:
RAISE USING message = 'Description of the error';
Limitations
At the moment there is a limitation that will be removed in future versions. When the BQ stored procedure exports an asset, multiple files may be created in GCS. BQRB currently does not support that, so only one file may be exported from BQ per asset. This means that BQ needs a workaround to ensure that just one file is generated, and it also means that the maximum asset size is 1 GB. Multiple assets can still be created.
The workaround used in the example below is the LIMIT 9223372036854775807 clause. This forces BQ to process all data on a single node in order to guarantee the limit, and it will therefore create just a single file. There are other ways of achieving the same thing.
Example report
Besides creating the above two functions, create the following stored procedure in your reporting dataset.
CREATE OR REPLACE PROCEDURE bqrb_example.generate_report(ledger STRING, report JSON)
BEGIN
DECLARE filename STRING;
DECLARE full_filename STRING;
DECLARE proof_custom STRING;
DECLARE location STRING;
DECLARE bucket STRING;
DECLARE export_filename STRING;
DECLARE cnt INT64;
-- Generate report.
CREATE TEMP TABLE result AS
SELECT * FROM bqrb_example.transactions
WHERE CAST(account AS STRING) = JSON_VALUE(report.data.custom.account);
-- In case the special value 'error' is used as the account parameter when creating
-- the report record, we just fail report generation, but don't log this into the report record.
-- This is done just for demonstration purposes and would not be part of an actual report.
IF JSON_VALUE(report.data.custom.account) = 'error' THEN
RAISE USING message = 'Error that is not logged';
END IF;
-- You may not want to fail report generation in case of an unknown account, this is just an example.
SET cnt = (SELECT COUNT(*) FROM result);
IF cnt = 0 THEN
SET proof_custom = '{"status": "rejected", "reason": "bridge.unexpected-error", "detail": "Unknown account"}';
SELECT PARSE_JSON(proof_custom) AS proof_custom;
RETURN;
END IF;
-- Prepare export_filename.
SET bucket = 'ledger-reports-stg';
SET location = 'gs://' || bucket || '/' || bqrb_example.generate_report_prefix(ledger, report);
SET filename = 'report.csv';
SET full_filename = location || '/' || filename;
SET export_filename = bqrb_example.convert_to_export_filename(full_filename);
-- Export data and successfully complete report.
EXPORT DATA
OPTIONS(
uri=(export_filename),
format='CSV',
overwrite=true)
AS
SELECT * FROM result
LIMIT 9223372036854775807;
-- Save assets into report record.
SET proof_custom = '{"status": "completed", "assets": [{"handle": "' || filename || '", "output": "' || full_filename || '"}]}';
SELECT PARSE_JSON(proof_custom) AS proof_custom;
END;
Going through the example, you can see that the test report demonstrates most of the features mentioned above.
If you create a report with a valid account number like 1001001001, a report will be generated as you would expect. If you instead provide an invalid account number like test, the report will fail and log Unknown account. Finally, if you specify error as the account, this will produce an error that is not logged, but the report will still be rejected.
Conclusion
We have instantiated our very own BQRB instance and created a report using just SQL in BigQuery. This is the easiest way to create reports, but you will need to set up a data sync in order to get all the data your reports require.
In case you don't want to deal with hosting BQRB yourself, you can use Minka's BQRB instances along with preexisting reports.