Add option to set "database_prefix" when import mode is "from_s3". (#173)
* Add option to set "database_prefix" when import mode is "from_s3".
Using this approach, it's possible to import between multiple accounts and
add a database prefix.
* Modifications to merge #66
* Reverted license header
* Removed unused code and fixed indentation
* Added --database-prefix parameter for from-s3 mode in README
---------
Co-authored-by: daniloffantinato <[email protected]>
utilities/Hive_metastore_migration/README.md (1 addition, 0 deletions)
@@ -245,6 +245,7 @@ as a Glue ETL job, if AWS Glue can directly connect to your Hive metastore.
 - `--database-input-path` set to the S3 path containing only databases. For example: `s3://someBucket/output_path_from_previous_job/databases`
 - `--table-input-path` set to the S3 path containing only tables. For example: `s3://someBucket/output_path_from_previous_job/tables`
 - `--partition-input-path` set to the S3 path containing only partitions. For example: `s3://someBucket/output_path_from_previous_job/partitions`
+- `--database-prefix` (optional) set to a string prefix that is applied to the database name created in the AWS Glue Data Catalog. You can use it as a way to track the origin of the metadata and avoid naming conflicts. The default is the empty string.
Also, because there is no need to connect to any JDBC source, the job doesn't
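To illustrate the behavior the new `--database-prefix` parameter describes, here is a minimal sketch of prepending a prefix to database names before they are created in the target catalog. The function name and the shape of the database entries are assumptions for illustration, not the utility's actual code:

```python
def apply_database_prefix(databases, prefix=""):
    """Return copies of database entries with `prefix` prepended to each name.

    `databases` is assumed to be a list of dicts with a "name" key, mirroring
    exported metastore entities. An empty prefix (the default) leaves names
    unchanged, matching the documented default of the empty string.
    """
    return [{**db, "name": prefix + db["name"]} for db in databases]


# Example: tagging databases imported from another account to avoid
# naming conflicts in the shared AWS Glue Data Catalog.
source_dbs = [{"name": "sales"}, {"name": "hr"}]
prefixed = apply_database_prefix(source_dbs, prefix="account_a_")
print([db["name"] for db in prefixed])
# → ['account_a_sales', 'account_a_hr']
```

Because the prefix is a plain string concatenation, it also composes naturally with per-account prefixes such as an account ID, which is one way to track the origin of imported metadata.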