We need fully automated retrieval of datasets #79
Comments
@caesar0301, have you implemented this? Or is it …
@KOLANICH Hi, I didn't notice that you were working on it. If so, that is great! Your idea is similar to what is considered in the project https://frictionlessdata.io/. When you have finished your idea, we can start a new service to host your API calls.
@KOLANICH Can you describe your method in detail? Maybe I can give you some help. In my view, there are two potential ways to get direct links to machine-processable sources: 1. append them manually to the yaml files; 2. use crawling to find them automatically. I am pursuing the second. Briefly: use crawling to index the dataset pages and try to find the original machine-processable sources, such as tabular/json/xml data, then start independent processes to retrieve and update the data periodically. A rough sketch of the link-discovery step is below.
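For example, the discovery step could look something like this (a simplified sketch, not the actual implementation; requests/BeautifulSoup are just one possible choice, and `find_data_links` and `DATA_SUFFIXES` are illustrative names):

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# Extensions we treat as machine-processable sources.
DATA_SUFFIXES = (".csv", ".tsv", ".json", ".xml")

def find_data_links(page_url):
    """Scan one dataset page for direct links to tabular/json/xml files."""
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        urljoin(page_url, a["href"])           # resolve relative links
        for a in soup.find_all("a", href=True)
        if a["href"].lower().endswith(DATA_SUFFIXES)
    ]
```

A periodic worker would then call this per indexed page and re-download whatever it finds.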
Hi.
Closing issues without any hint about why they have been closed (such as a message, or a reference in the commit that solves the issue) is just impolite.
I have already read their specs, but their specs are for vendor-hosted metadata; we need a standalone one. You know, it is a pain to deal with people who just close one's issues and pull requests, or, even worse, just ignore them because they have no time for them.
I think we should start from manually created ones (with a minor dumb script, already implemented, inferring column types from data; a sketch of the idea is below). When we have a large enough dataset of links to datasets with their hand-crafted descriptions, we can start training models that derive column types from textual descriptions. https://gitlab.com/KOLANICH/SurvivalDatasets is my current draft. It is not very finished, though, and unsuitable for any real use for now (I currently use it to debug some survival analysis code).
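The dumb script amounts to roughly this (a minimal sketch, not the actual code; `infer_column_types` is an illustrative name):

```python
import pandas as pd

def infer_column_types(csv_path, nrows=1000):
    """Read a sample of the file and let pandas infer each column's dtype."""
    sample = pd.read_csv(csv_path, nrows=nrows)
    return {name: str(dtype) for name, dtype in sample.dtypes.items()}

# e.g. {"duration": "float64", "event": "bool", "group": "object"}
```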
daviddiazvico/scikit-datasets#10 is strongly related
@KOLANICH My apologies! I mistakenly thought this issue was one I had opened myself a long time ago without getting any active response, and that your comment didn't address my point. Actually, this is my fault. The issue I opened was here: awesomedata/awesome-public-datasets#262
You could use DCAT (or DCAT-AP) to describe dataset metadata; at the distribution level you have accessURL and downloadURL.
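For instance, a minimal DCAT description built with rdflib (the dataset URIs and URLs here are made up for illustration):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

dataset = URIRef("http://example.org/dataset/iris")       # illustrative URI
dist = URIRef("http://example.org/dataset/iris/csv")

g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Iris")))
g.add((dataset, DCAT.distribution, dist))

g.add((dist, RDF.type, DCAT.Distribution))
# accessURL: the landing page a human would visit
g.add((dist, DCAT.accessURL, URIRef("http://example.org/datasets/iris.html")))
# downloadURL: the direct, machine-retrievable file
g.add((dist, DCAT.downloadURL, URIRef("http://example.org/files/iris.csv")))
g.add((dist, DCAT.mediaType, Literal("text/csv")))

print(g.serialize(format="turtle"))
```

The accessURL/downloadURL distinction maps exactly onto the "page link vs. direct file link" problem discussed here.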
Thanks for letting me know. |
Currently the machine-readable descriptions contain links to the pages containing the datasets. We need them to contain direct links to the files, plus machine-processable instructions on how to transform those files into `pandas.DataFrame`s, so that we can automate retrieval down to a single API call.
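Something like this is what "a single API call" could look like (a sketch under an assumed record layout; `fetch_dataframe` and the record fields are hypothetical):

```python
import pandas as pd

# Hypothetical hand-crafted record: a direct link plus loading instructions.
record = {
    "download_url": "https://example.org/data/iris.csv",  # illustrative URL
    "format": "csv",
    "read_args": {"header": 0},
}

def fetch_dataframe(record):
    """Resolve a metadata record into a pandas.DataFrame in one call."""
    readers = {"csv": pd.read_csv, "json": pd.read_json}
    reader = readers[record["format"]]
    return reader(record["download_url"], **record.get("read_args", {}))

# df = fetch_dataframe(record)
```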