
Commit: update

tilo committed Jul 8, 2024
1 parent d9d0d9b commit 0b7cfd2
Showing 2 changed files with 2 additions and 1 deletion.
README.md — 1 change: 1 addition & 0 deletions

```diff
@@ -45,6 +45,7 @@ Or install it yourself as:
 * [Value Converters](docs/value_converters.md)

 # Articles
+* [Parsing CSV Files in Ruby with SmarterCSV](https://tilo-sloboda.medium.com/parsing-csv-files-in-ruby-with-smartercsv-6ce66fb6cf38)
 * [Processing 1.4 Million CSV Records in Ruby, fast](https://lcx.wien/blog/processing-14-million-csv-records-in-ruby/)
 * [Speeding up CSV parsing with parallel processing](http://xjlin0.github.io/tech/2015/05/25/faster-parsing-csv-with-parallel-processing)
 * [The original post](http://www.unixgods.org/Ruby/process_csv_as_hashes.html) that started SmarterCSV
```
docs/batch_processing.md — 2 changes: 1 addition & 1 deletion

```diff
@@ -58,7 +58,7 @@ and how the `process` method returns the number of chunks when called with a block:
 n = SmarterCSV.process(filename, options) do |chunk|
   # we're passing a block in, to process each resulting hash / row (block takes array of hashes)
   # when chunking is enabled, there are up to :chunk_size hashes in each chunk
-  MyModel.collection.insert( chunk ) # insert up to 100 records at a time
+  MyModel.insert_all( chunk ) # insert up to 100 records at a time
 end

 => returns number of chunks we processed
```
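The docs change above swaps a Mongoid-era bulk insert (`MyModel.collection.insert`) for Rails' `insert_all`, keeping one bulk INSERT per chunk. As an illustrative sketch only (this is not SmarterCSV's implementation — `process_in_chunks` and the sample data are made up here), the chunking behavior the docs describe can be approximated with Ruby's stdlib CSV:

```ruby
require "csv"
require "tempfile"

# Sketch of chunked CSV processing: read rows, symbolize headers, and yield
# them in batches of chunk_size — roughly what
# SmarterCSV.process(filename, chunk_size: 100) { |chunk| ... } provides.
def process_in_chunks(filename, chunk_size: 100)
  chunks = 0
  buffer = []
  CSV.foreach(filename, headers: true) do |row|
    buffer << row.to_h.transform_keys { |k| k.strip.downcase.to_sym }
    if buffer.size >= chunk_size
      yield buffer
      chunks += 1
      buffer = []
    end
  end
  unless buffer.empty?  # flush the final, possibly smaller, chunk
    yield buffer
    chunks += 1
  end
  chunks  # like SmarterCSV.process with a block, return the chunk count
end

# Demo with a throwaway file; in a Rails app each chunk could go to
# MyModel.insert_all(chunk) as in the diff above.
file = Tempfile.new(["users", ".csv"])
file.write("Name,Age\nAnn,30\nBob,25\nCid,41\n")
file.close

n = process_in_chunks(file.path, chunk_size: 2) do |chunk|
  puts chunk.inspect  # e.g. [{name: "Ann", age: "30"}, {name: "Bob", age: "25"}]
end
puts n  # 3 rows with chunk_size 2 => 2 chunks
```

Each chunk is an array of hashes, so it can be handed directly to any bulk-insert API that accepts an array of attribute hashes.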
