Eclair_de_XII
- TL;DR Summary
- Basically, I wrote a bunch of code that scraped data off the internet into some objects, and stored those objects by writing them to .csv and .txt files. Is there any advantage to storing them in .json or .pkl files instead of the file types just mentioned? I'm vaguely aware that the code to store objects in the former is much less complicated than the code I wrote to store them in the latter. But, for example, is loading objects from .json or .pkl files faster?
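To make the comparison concrete, here is a rough sketch of the two approaches using a toy list of dictionaries (the file names and fields are made up for illustration; my real objects are more involved):

```python
import csv
import json
import pickle

# Toy stand-in for the scraped objects (placeholder data, not my real classes).
records = [
    {"title": "Example page", "views": 123},
    {"title": "Another page", "views": 456},
]

# The serialization approach: one call handles the whole structure.
with open("records.json", "w") as f:
    json.dump(records, f)

with open("records.pkl", "wb") as f:
    pickle.dump(records, f)

# The approach I actually took: writing the fields out by hand as CSV rows.
with open("records.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "views"])
    writer.writeheader()
    writer.writerows(records)
```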
I'm figuratively beating myself up for not knowing about these modules when I wrote my own. On one hand, I feel like it would be a giant hassle to rewrite my code just to work one of them in. One of my modules is over a thousand lines long, which might be inconsequential to professional coders, but is certainly an accomplishment for an amateur. The thought of rewriting that module alone makes me dread a task that might only modestly improve how efficiently it runs; a silly worry, I suppose, considering that Python programs generally run slower than programs written in compiled languages like C, C++, or C#. On the other hand, I feel like it would be good practice to get into the habit of using efficient data-storage techniques.
I realize I've made it clear that I don't really want to go through with this task, and that no one is forcing me to do so except myself. But my question remains: would loading objects from data-persistence files be faster or slower than loading them from normal text files?
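If it helps frame the question, this is roughly how I imagine timing the load step (a sketch only, reusing the hypothetical files from the snippet above):

```python
import json
import pickle
import timeit

def load_json():
    # Parse the whole structure back out of the JSON file.
    with open("records.json") as f:
        return json.load(f)

def load_pickle():
    # Deserialize the pickled objects (binary mode required).
    with open("records.pkl", "rb") as f:
        return pickle.load(f)

# Time 1,000 repeated loads of each format.
print("json:  ", timeit.timeit(load_json, number=1000))
print("pickle:", timeit.timeit(load_pickle, number=1000))
```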