I'm not sure if this question is better suited for Server Fault, but I've been messing with Amazon RDS lately and am having trouble granting the FILE privilege to my web host's MySQL user.
I'd assume that a simple:
grant file on *.* to 'webuser'@'%';
would work, but it does not, and I can't do it with my 'root' user either. What gives? The reason we use LOAD DATA is that it is extremely fast for doing thousands of inserts at once.
Does anyone know how to remedy this, or do I need to find a different way?
This page, http://docs.amazonwebservices.com/AmazonRDS/latest/DeveloperGuide/index.html?Concepts.DBInstance.html, seems to suggest that I need to find a different way around this.
Help?
UPDATE: I'm not trying to import a database -- I just want to use LOAD DATA to insert several hundred thousand rows at a time.
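For reference, the kind of statement I'm trying to run looks roughly like this (the table name and file path are just placeholders):

load data infile '/tmp/rows.csv'
into table big_table
fields terminated by ','
lines terminated by '\n';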
After digging around, this is what we have:
mysql> grant file on *.* to 'devuser'@'%';
ERROR 1045 (28000): Access denied for user 'root'@'%' (using password: YES)

mysql> select User, File_priv, Grant_priv, Super_priv from mysql.user;
+----------+-----------+------------+------------+
| User     | File_priv | Grant_priv | Super_priv |
+----------+-----------+------------+------------+
| rdsadmin | Y         | Y          | Y          |
| root     | N         | Y          | N          |
| devuser  | N         | N          | N          |
+----------+-----------+------------+------------+
You need to use LOAD DATA LOCAL INFILE, as the file is not on the MySQL server but on the machine you are running the command from.
As per the comment below, you may also need to include the flag:
--local-infile=1
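Putting it together, a minimal session from the client machine might look something like this (the endpoint, database, table, and file path are just placeholders):

mysql --local-infile=1 -h mydbinstance.abc123.us-east-1.rds.amazonaws.com -u devuser -p mydb

mysql> load data local infile '/path/to/rows.csv'
    -> into table big_table
    -> fields terminated by ','
    -> lines terminated by '\n';

Because LOCAL makes the client read the file and send it to the server, it does not need the FILE privilege on the RDS instance. Depending on the MySQL version, the server-side local_infile setting may also need to be enabled, which on RDS is done through the DB parameter group.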