From 0b7cfd2740da838ab1cf1ffe70de58253e14c1a8 Mon Sep 17 00:00:00 2001
From: Tilo Sloboda
Date: Mon, 8 Jul 2024 21:13:23 +0800
Subject: [PATCH] update

---
 README.md                | 1 +
 docs/batch_processing.md | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3648d8b..c6a214b 100644
--- a/README.md
+++ b/README.md
@@ -45,6 +45,7 @@ Or install it yourself as:
 * [Value Converters](docs/value_converters.md)
 
 # Articles
+* [Parsing CSV Files in Ruby with SmarterCSV](https://tilo-sloboda.medium.com/parsing-csv-files-in-ruby-with-smartercsv-6ce66fb6cf38)
 * [Processing 1.4 Million CSV Records in Ruby, fast ](https://lcx.wien/blog/processing-14-million-csv-records-in-ruby/)
 * [Speeding up CSV parsing with parallel processing](http://xjlin0.github.io/tech/2015/05/25/faster-parsing-csv-with-parallel-processing)
 * [The original post](http://www.unixgods.org/Ruby/process_csv_as_hashes.html) that started SmarterCSV
diff --git a/docs/batch_processing.md b/docs/batch_processing.md
index a95a059..6bd392e 100644
--- a/docs/batch_processing.md
+++ b/docs/batch_processing.md
@@ -58,7 +58,7 @@ and how the `process` method returns the number of chunks when called with a block
 
 n = SmarterCSV.process(filename, options) do |chunk|
   # we're passing a block in, to process each resulting hash / row (block takes array of hashes)
   # when chunking is enabled, there are up to :chunk_size hashes in each chunk
-  MyModel.collection.insert( chunk )  # insert up to 100 records at a time
+  MyModel.insert_all( chunk )  # insert up to 100 records at a time
 end
 => returns number of chunks we processed
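
Note on the second hunk: `insert_all` is the Rails 6+ ActiveRecord bulk-insert API, replacing the MongoDB driver call `MyModel.collection.insert`. A minimal sketch of the chunked import the updated docs describe, assuming an ActiveRecord model named `MyModel` and a local CSV path (both placeholders, not part of this patch):

```ruby
require 'smarter_csv'

filename = 'path/to/data.csv'    # hypothetical input file
options  = { chunk_size: 100 }   # yield arrays of up to 100 row-hashes

n = SmarterCSV.process(filename, options) do |chunk|
  # chunk is an array of hashes, which insert_all accepts directly,
  # issuing a single multi-row INSERT per chunk (Rails 6+)
  MyModel.insert_all(chunk)
end
puts "processed #{n} chunks"
```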