Bulk insert action

Use the Bulk insert action to read rows from a CSV or text file and insert them into a target database table in batches. With this action, you can load large datasets efficiently and capture the total number of inserted rows.

Use the Bulk insert action for large-scale data insert operations, such as loading thousands or millions of records from a file into a database.

Settings

  • In the Session name field, enter the name of the session you used to connect to the database server in the Connect action. For more details, see Using the Connect action with databases.
  • In the Source file field, enter the path of the source file to read.
    Note: Only .csv and .txt files are supported.
  • In the Table name field, enter the name of the target database table into which you want to insert the rows.
  • In the Delimiter field, specify how the source file columns are separated.
    Note: Comma is the default delimiter. The other supported values are Tab, Newline, and Other.
  • In the specificDelimiter field, enter the custom delimiter character to use when Delimiter is set to Other. For example, |.
  • In the Start row number field, enter the first data row to read from the source file.
    Note: The default value is 2 because the file is expected to contain a header row in line 1. The value must be 2 or greater.
  • In the Columns mapping field, click Add mapping to map source file columns to target table columns. Ensure column names and data types match.
    • In the File column name field of a mapping entry, enter the column header exactly as it appears in the source file.
    • In the Table column name field of a mapping entry, enter the matching column name in the target table.
    Note:
    • Column mapping is optional.
    • If no column mapping is added, all columns from the source file are mapped to all columns of the target table.
    • Ensure that the target table exists in the database with the required columns. Also, verify the case sensitivity of column names for one-to-one mapping of fields.
    • This action is optimized for high-throughput scenarios and delivers significant performance benefits compared to row-by-row insertion.

    • For smaller data volumes, standard insert operations (for example, using the Loop action) are more appropriate and simpler to configure.

  • In the Batch size field, specify how many rows to insert per batch.
    Note: The default batch size is 1000. The value must be between 200 and 500,000.
  • Enter a timeout value that specifies the maximum wait time, in seconds, for each batch. The default value is 1800, and the value must be between 1 and 3600.
    Note: If a batch times out, the action returns the total number of rows successfully inserted. To resume processing, add 1 to this count and use the result as the Start row number for the next run.
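
The batching behavior described by the settings above can be sketched in plain Python, as an illustration only. This uses the stdlib csv and sqlite3 modules as stand-ins for the action's file reader and database session; the bulk_insert function, table, and column names are hypothetical, not the action's internals:

```python
import csv
import io
import sqlite3

def bulk_insert(conn, table, csv_text, column_map=None,
                delimiter=",", start_row=2, batch_size=1000):
    """Illustrative sketch: insert CSV rows into `table` in batches and
    return the total inserted count (mirroring the action's Number result)."""
    rows = list(csv.reader(io.StringIO(csv_text), delimiter=delimiter))
    header = rows[0]
    # Start row is 1-based over the whole file; the default of 2 skips the header.
    data = rows[start_row - 1:]

    if column_map:  # {file column name: table column name}
        indexes = [header.index(src) for src in column_map]
        columns = list(column_map.values())
        data = [[row[i] for i in indexes] for row in data]
    else:
        # No mapping: all file columns map to all table columns.
        columns = header

    placeholders = ", ".join("?" for _ in columns)
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"

    total = 0
    cur = conn.cursor()
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        cur.executemany(sql, batch)  # one round trip per batch, not per row
        conn.commit()
        total += len(batch)
    return total

# Example: map file column "full_name" to table column "name".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
csv_text = "full_name,age\nAda,36\nAlan,41\nGrace,45\n"
count = bulk_insert(conn, "people", csv_text,
                    column_map={"full_name": "name", "age": "age"},
                    batch_size=2)
print(count)  # → 3
```

The batched executemany call is what gives the performance benefit over issuing one INSERT statement per row.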

Result: With this action, you can insert records in bulk, and it returns the total number of inserted records as a Number.
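
The timeout note above can be expressed as a tiny helper, assuming the returned count is fed into the next run's Start row number exactly as described:

```python
def next_start_row(inserted_count: int) -> int:
    """Compute the Start row number for a resumed run, following the
    timeout note above: add 1 to the returned inserted-row count."""
    return inserted_count + 1

# If a timed-out run reports 500 rows inserted, resume from row 501.
print(next_start_row(500))  # → 501
```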

Overall, this action performs a bulk insert without requiring a Loop and runs synchronously, continuing until all rows are inserted or an error occurs, which delivers significantly better performance than row-by-row insertion for large datasets.
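
To contrast the two approaches, here is a minimal sketch (again using Python's stdlib sqlite3 as a stand-in database; the table and column names are illustrative) of row-by-row insertion, as a Loop-style flow would perform it, versus a single batched call:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (sku TEXT, qty INTEGER)")
rows = [("A-1", 5), ("B-2", 3), ("C-3", 9)]

# Row-by-row (Loop-style): simple to configure, one statement per row.
for sku, qty in rows:
    conn.execute("INSERT INTO items (sku, qty) VALUES (?, ?)", (sku, qty))

# Batched (bulk-insert-style): one call submits the whole batch.
conn.execute("DELETE FROM items")
conn.executemany("INSERT INTO items (sku, qty) VALUES (?, ?)", rows)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # → 3
```

For a handful of rows the two are indistinguishable; the batched form pays off when the row count reaches the thousands or millions.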