GCP Storage Bucket
Didomi can push exported data directly to a GCP storage bucket.
Before Didomi can push batch export files to your Google Cloud Platform account, you must create a storage bucket, create a service account, grant the service account access to the bucket, and enable the required APIs. Together, these allow Didomi to upload files to your GCP storage bucket.
On the GCP console, create a storage bucket that will host the data exported by Didomi.
(1) Log in to the GCP console, select your organization and project, and select Storage > Browser > Create bucket.
(2) Enter a bucket name and configure your bucket.
(3) Click Create.
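If you prefer the gcloud CLI over the console, the bucket can be created with a single command. The bucket name, project ID, and location below are example placeholders; substitute your own values.

```shell
# Create the bucket that will receive Didomi's batch export files
# (my-didomi-exports, my-project-id, and EU are example values)
gcloud storage buckets create gs://my-didomi-exports \
  --project=my-project-id \
  --location=EU
```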
Create a Didomi Batch Export service account on the GCP console.
Didomi needs this service account to be granted permission to read and write data in the storage bucket created in step 1.
(1) Log in to the GCP console, select your organization, and select IAM & Admin > Service Account > Create Service Account.
(2) Enter a service account name and description.
(3) Click Create.
Do not grant the service account access to your project as you will use Access Control Lists to give access to your bucket.
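The same service account can be created from the gcloud CLI; note that no project-level role is attached, consistent with the warning above. The account name, project ID, and display name are example placeholders.

```shell
# Create a dedicated service account for Didomi batch exports
# (didomi-batch-export and my-project-id are example values);
# no project-level role is granted -- access is given on the bucket itself.
gcloud iam service-accounts create didomi-batch-export \
  --project=my-project-id \
  --display-name="Didomi Batch Export" \
  --description="Uploads Didomi batch export files"
```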
The service account security key is used for service-to-service authentication within GCP. The private key file is required to authenticate API calls between your GCP projects and Didomi.
(1) Click + CREATE KEY.
(2) Create and download a JSON key.
(3) Use the content of this JSON key to set up your batch export.
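A JSON key can also be generated from the gcloud CLI; the file name and service account email below are example placeholders matching the names used earlier.

```shell
# Create and download a JSON key for the service account
# (didomi-key.json and the service account email are example values)
gcloud iam service-accounts keys create didomi-key.json \
  --iam-account=didomi-batch-export@my-project-id.iam.gserviceaccount.com
```

Treat the downloaded key file as a secret: anyone holding it can act as the service account.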
Add the service account to the Access Control Lists of the storage bucket created in step 1.
(1) Go to Storage > Storage browser.
(2) Click on the bucket created in step 1.
(3) Go to Permissions and click on Add members.
(4) Add the service account created in step 2 as a member with the roles Storage Legacy Bucket Reader and Storage Object Admin.
(5) Click Save.
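The same two roles can be granted on the bucket from the gcloud CLI; the bucket name and service account email are example placeholders.

```shell
# Grant the Didomi service account the two roles on the export bucket
# (bucket name and service account email are example values)
gcloud storage buckets add-iam-policy-binding gs://my-didomi-exports \
  --member="serviceAccount:didomi-batch-export@my-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.legacyBucketReader"

gcloud storage buckets add-iam-policy-binding gs://my-didomi-exports \
  --member="serviceAccount:didomi-batch-export@my-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```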
If you intend to grant specific permissions instead of these roles, please make sure that both the storage.objects.* and storage.buckets.get permissions are enabled. Otherwise, our process will not be able to export the files. The details below explain why these permissions are needed.
storage.objects.*: We need permissions to read, write, and delete files. This permission is only for the target export bucket.
storage.buckets.get: We perform a buckets.get operation as a first step to ensure that the bucket exists (before writing to it) and to retrieve the bucket's metadata. The metadata is used to extract information such as the bucket location and the storageClass, which are used to handle some operations in the bucket.
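If you go the specific-permissions route, one way to package them is a custom IAM role. Custom roles cannot use the storage.objects.* wildcard, so the individual object permissions must be listed explicitly; the role ID and project ID below are example placeholders, and the exact set of object permissions you need may vary.

```shell
# Example custom role bundling the minimal permissions described above
# (didomiBatchExport and my-project-id are example values; custom roles
# require explicit permissions, so storage.objects.* is expanded here)
gcloud iam roles create didomiBatchExport \
  --project=my-project-id \
  --title="Didomi Batch Export" \
  --permissions=storage.objects.create,storage.objects.get,storage.objects.list,storage.objects.delete,storage.buckets.get
```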