Bulk Sub Account Creation
This document describes the three‐step process for performing a bulk upload of SubAccount data via CSV. It is organized into:
Step 1: Initialize Upload
Step 2: Client‐Side File Upload
Step 3: Finalize Upload
Each step includes endpoint URLs, required headers, request/response payloads, error conditions, and example cURL commands.
Step 1: Initialize Upload Endpoint
This endpoint creates a multipart upload session in S3 and returns a list of presigned URLs (one per part) that the client can use to upload the CSV file in chunks. Because SubAccount CSVs are typically small, the default is a single .csv part, but the API supports splitting the file into multiple parts if desired.
HTTP Method:
POST
URL:
{baseURL}/api/subaccount/bulk/initialize
Authorization
Headers:
Authorization: Bearer {YOUR-ACCESS-TOKEN}
x-authentication-api-key: {API-KEY}
x-multitenant-external-id: {TENANT-EXTERNAL-ID}
{YOUR-ACCESS-TOKEN} is obtained from the Authentication service ([SDK Mobile] - Generate an API Token).
{API-KEY} is an API-key string associated with an Admin role (Authority = "ADMIN").
{TENANT-EXTERNAL-ID} is the tenant external ID; omitting this header results in a 403 (see Error Responses below).
Request Parameters:
N/A
Request Body: N/A
Error Responses:
If you omit x-multitenant-external-id or x-authentication-api-key:
{
"status": 403,
"message": "User account/subAccount is not enabled or invalid.",
"timestamp": 1749192550299
}
Curl example:
curl --location --request POST '{{BASE_URL}}/api/subaccount/bulk/initialize' \
--header 'x-authentication-api-key: {{API_KEY}}' \
--header 'x-multitenant-external-id: 517ab75d-9ebd-4c41-a6e0-35a643789d04' \
--header 'Authorization: Bearer {{YOUR_ACCESS_TOKEN}}'
Response:
{
"uploadId": "dUwYW.TkDRreyN.gfgjXULmPruqPMIZqwyuBr6AmX20ugmSZwCejr3otjwxCIqFlai6fJO9B.ZTNiAZjeJKzOpVeOFAtYklr9r.qN5AVB4afMnJRr8XfNaj_dLnLdPPtp5fyQUqOznIO7xd9TjiWWgvaBLm..BWzFKbv8hVb334-",
"uploadParts": [
{
"uploadPresignedUrl": "https://…&partNumber=1&uploadId=dUwYW…",
"partNumber": 1
}
]
}
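If you are scripting the flow, the sketch below shows one way to call the initialize endpoint and read the uploadId and presigned part URLs. It is an illustrative example only (not an official client), using Python's requests library; BASE_URL, API_KEY, ACCESS_TOKEN, and TENANT_EXTERNAL_ID are placeholders you must supply.
import requests

BASE_URL = "https://example.com"              # your {baseURL}
API_KEY = "YOUR-API-KEY"                      # Admin-role API key
ACCESS_TOKEN = "YOUR-ACCESS-TOKEN"            # token from the Authentication service
TENANT_EXTERNAL_ID = "517ab75d-9ebd-4c41-a6e0-35a643789d04"

# Start the multipart upload session and collect the presigned URL for each part.
resp = requests.post(
    f"{BASE_URL}/api/subaccount/bulk/initialize",
    headers={
        "x-authentication-api-key": API_KEY,
        "x-multitenant-external-id": TENANT_EXTERNAL_ID,
        "Authorization": f"Bearer {ACCESS_TOKEN}",
    },
)
resp.raise_for_status()
session = resp.json()
upload_id = session["uploadId"]
presigned_urls = {part["partNumber"]: part["uploadPresignedUrl"] for part in session["uploadParts"]}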
Step 2: Client-Side File Upload
After receiving the presigned URLs in Step 1, your client must upload each CSV part using HTTP PUT. The order in which you upload parts does not affect server-side validation, but you must supply the correct partNumber when you finalize.
Upload URL
Use exactly the uploadPresignedUrl returned for each part. This URL already includes query parameters (?partNumber=…&uploadId=…&X-Amz…) that bind you to a specific partNumber and uploadId.
HTTP Method
PUT
Headers
Content-Type: text/csv
Because the presigned URL was generated from an
UploadPartRequest.builder().contentType("text/csv")…
, S3 will reject any PUT whoseContent-Type
is not exactlytext/csv
.You must include no other required headers. The signature, expiration, and part information are baked into the URL.
Body
Binary contents of the CSV chunk for that part. If amountOfParts = 1, the "chunk" is simply your entire .csv file.
CSV file structure
externalId,name,isEnabled,zipCode,address,city,state,cameraModule,noiseCancelling,transcription,playerAutoPlay,audioMuted
An example of the CSV file is shown below.

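The two lines that follow are an illustrative sketch only: the header line repeats the required structure, while the data row uses hypothetical values (the true/false boolean format is an assumption); substitute your own SubAccount data.
externalId,name,isEnabled,zipCode,address,city,state,cameraModule,noiseCancelling,transcription,playerAutoPlay,audioMuted
sub-001,Example Sub Account,true,33130,100 Example Street,Miami,FL,true,false,true,false,false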
Curl example
curl --location --request PUT '{uploadPresignedUrl}' \
--header 'Content-Type: text/csv' \
--data-binary @"/path/to/subaccounts.csv"
Response
When you successfully upload a part via the presigned URL, S3 will respond with a 200 OK and include an ETag header. The body is empty. For example:
HTTP/1.1 200 OK
x-amz-id-2: example-request-id
x-amz-request-id: example-request-id
Date: Tue, 01 Jun 2025 18:22:10 GMT
ETag: "94f70fbf20d6908d9f56062a6e9f8034"
Content-Length: 0
Although this multipart flow only handles one part at a time (and parts can be uploaded in any sequence), we recommend uploading them in ascending order of partNumber (1 → 2 → … → N). If a network failure interrupts a part upload, you can retry that single part any number of times before the presigned URL expires. Presigned URLs are time-limited, so once they expire you must go back to Step 1 and request a fresh uploadId and new presigned URLs.
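As a companion to the cURL example above, the sketch below illustrates one way to PUT a part to its presigned URL in Python and capture the ETag header needed in Step 3. It is an assumption-laden example, not an official client; the presigned URL and file path are placeholders.
import requests

presigned_url = "https://...&partNumber=1&uploadId=..."   # uploadPresignedUrl from Step 1
csv_path = "/path/to/subaccounts.csv"

# Upload the part; Content-Type must be exactly text/csv or S3 rejects the request.
with open(csv_path, "rb") as csv_file:
    resp = requests.put(presigned_url, data=csv_file, headers={"Content-Type": "text/csv"})
resp.raise_for_status()

# S3 returns the part's ETag in a response header; keep it (quotes included) for Step 3.
etag = resp.headers["ETag"]   # e.g. '"94f70fbf20d6908d9f56062a6e9f8034"'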
Step 3: Finalize Upload Endpoint
Once all parts have been successfully PUT to S3, you must call the "finalize" endpoint. This triggers S3's CompleteMultipartUpload and, for SubAccount, also begins server-side CSV parsing, validation, and database insertion.
URL
{baseURL}/api/subaccount/bulk/upload/finalize
Method
POST
Authorization
Headers:
Authorization: Bearer {YOUR-ACCESS-TOKEN}
Content-Type: application/json
Curl example
curl --location --request POST '{{BASE_URL}}/api/subaccount/bulk/upload/finalize' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {{YOUR_ACCESS_TOKEN}}' \
--data '{
"uploadId": "dUwYW.TkDRreyN.gfgj....",
"parts": [
{
"partNumber": 1,
"etag": "\"94f70fbf20d6908d9f56062a6e9f8034\""
}
]
}'
uploadId (String): The exact same uploadId you received in Step 1.
parts (Array): One object per part. Each object must include partNumber and etag.
partNumber (Integer): Part number (1 through amountOfParts).
etag (String): The ETag returned by S3 when you executed the PUT for that part. You can extract it from the ETag response header (e.g., ETag: "xyz…").
The number of parts sent in the finalize request (Step 3) must match the number of parts declared in the initialize request (Step 1); otherwise you will receive a 400 with a message such as "Uploaded parts in S3 do not match the provided parts. Expected: [n], Provided: [n1, n2]".
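Continuing the illustrative Python sketch from Steps 1 and 2 (placeholder values, not an official client), the finalize call sends the uploadId together with the partNumber/etag pairs captured during the part uploads:
import requests

BASE_URL = "https://example.com"                    # your {baseURL}
ACCESS_TOKEN = "YOUR-ACCESS-TOKEN"
upload_id = "dUwYW.TkDRreyN.gfgj...."               # uploadId from Step 1
etag = '"94f70fbf20d6908d9f56062a6e9f8034"'         # ETag captured in Step 2 (quotes included)

# Complete the multipart upload; a successful call returns 200 OK with an empty body.
resp = requests.post(
    f"{BASE_URL}/api/subaccount/bulk/upload/finalize",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},  # Content-Type: application/json is set automatically by json=
    json={
        "uploadId": upload_id,
        "parts": [{"partNumber": 1, "etag": etag}],
    },
)
resp.raise_for_status()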
Response
On success, returns HTTP 200 OK with an empty body. Simultaneously, the server will:
Invoke S3’s CompleteMultipartUpload
Read the merged CSV and perform header‐validation and per‐row validation
Create new SubAccount entities (and related SubAccountSetting entities) in the database
Update the BulkUpload record in PostgreSQL to one of:
COMPLETED (if no CSV errors and at least one row was created)
PARTIAL_FAILURE (if some rows failed validation)
FAILED (if header validation failed entirely or no rows were valid)
After this operation, you may query the database or S3 for an error report if any rows failed.
Error Handling
If the CSV’s first line (headers) does not contain all required columns (externalId, name, isEnabled, zipCode, address, city, state), the server will:
Return 200 OK immediately (CompleteMultipartUpload already succeeded), but in the background mark BulkUpload.status = FAILED.
Write an error report to S3 named {originalCsvKeyName.replace(".csv", "_error_report.txt")} (for example, subaccounts.csv produces subaccounts_error_report.txt) containing lines like:
"Header: externalId – Error message: Missing required column: externalId"
"Header: name – Error message: Missing required column: name"
…
Set BulkUpload.errorCount = <numberOfMissingHeaders>.
Clients should retrieve the report from S3 to diagnose missing columns.
If any data row fails (e.g., required field blank, invalid Boolean, unknown header in “settings” section):
The server will mark BulkUpload.status = PARTIAL_FAILURE (as long as at least one row succeeded) and generate an error report where each line is:
"Row {rowIndex} Field: {ColumnName} – Error message: {detailedReason}"
Clients can download the error-report file from the S3 key stored in the BulkUpload.errorReportUrl column.
If S3 rejects the CompleteMultipartUpload call (e.g., part lists don’t match), the finalize endpoint will return a 400 Bad Request with:
{
"type": "about:blank",
"title": "Bad Request",
"status": 400,
"detail": "Uploaded parts in S3 do not match the provided parts. Expected: [1,2], Provided: [1]",
"instance": "/api/subaccount/bulk/upload/finalize",
"message": "error.parts_mismatch",
"params": "uploadId"
}