Oct 19, 19:01 PDT
We have completed processing all of the queued streaming import requests. Streaming imports and queries are now running without delay.
Oct 17, 20:17 PDT
We've finished upgrading our RDS instance type.
We estimate it will take another 2 hours and 30 minutes to catch up for all customers.
Oct 17, 18:42 PDT
We're upgrading the RDS instance type to resolve this issue without any downtime or data loss.
Oct 17, 18:15 PDT
Query startup has also been delayed because of the degraded performance of our backend DB. We apologize for the inconvenience.
Oct 17, 15:46 PDT
We've deployed multiple fixes to mitigate the streaming import delay and are observing the import queue decreasing. All queued chunks are being imported.
NOTE: Again, NO data loss has occurred. We're receiving all the data; this is an internal processing delay.
Oct 17, 14:57 PDT
We are still observing the import delay and working on the fix.
Oct 17, 11:14 PDT
We're adjusting the configuration and the number of import worker instances to catch up on the delay. The current delay is 4,400 seconds.
Oct 17, 07:50 PDT
The cause of this delay was temporary lock contention in the database.
Oct 17, 06:11 PDT
We're observing an import delay caused by degraded backend DB performance and are investigating the cause.
NOTE: NO data loss has occurred. All data received on the cloud side is buffered safely. It is taking time to move data from the buffer to customers' databases and tables.
Oct 17, 05:49 PDT