where to obtain APN.mat file? #2
You can download apn.mat via the following GitHub link:
Thanks, and where can I get id2imagepixel.pkl?
This data is too large to upload, but it is relatively easy to generate.
Can you describe the process of generating these pkl files in detail, or point us to a reference repo on GitHub? Thanks a lot!
https://huggingface.co/docs/transformers/model_doc/swin#transformers.SwinModel |
Got it! Thank you very much.
What's more, there is still another question about using the output of SwinModel to generate id2imagepixel.pkl. Which field of the SwinModelOutput do I need to choose (such as pooler_output, hidden_states, or reshaped_hidden_states)?
pooler_output
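For reference, here is a minimal sketch of pulling pooler_output out of a SwinModel forward pass. The checkpoint name and the dummy input are assumptions, not necessarily what the repo uses; real inputs would come from an image processor rather than random noise.

```python
import torch
from transformers import SwinModel

# Assumed checkpoint; the repo may use a different Swin variant.
model = SwinModel.from_pretrained("microsoft/swin-base-patch4-window7-224")
model.eval()

# A dummy batch standing in for one preprocessed image of shape (3, 224, 224).
pixel_values = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    outputs = model(pixel_values=pixel_values)

# pooler_output is the pooled per-image embedding, shape (batch, hidden_size).
embedding = outputs.pooler_output
print(embedding.shape)  # e.g. torch.Size([1, 1024]) for swin-base
```

In practice you would iterate over your dataset, run each preprocessed image through the model, and keep the pooler_output rows as the image embeddings.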
Thank you for the code. What I want to ask is: can you give the exact steps or code to generate the id2imagepixel.pkl file?
This is the process of generating the embedding from the image. Please refer to the specific implementation mentioned in this issue, and feel free to ask if you have any further questions.
Thank you very much for sharing. I am a newbie; can you tell me the detailed steps for producing the id2imagepixel.pkl file?
You can follow this Python script (https://github.com/zjukg/DUET/blob/main/cache/generate_embedding.py) to understand the detailed steps. Essentially, the image is fetched through its URL, turned into an image representation, and then saved into a pkl file.
Thank you very much.
Hello, when I run bash script/AWA2/AWA2_GZSL.sh, I get the following error: RuntimeError: Tensors must have same number of dimensions: got 2 and 4. The same happens when I run on the CUB dataset, at this line: input = torch.cat((input_v[index].unsqueeze(0), positive_input.unsqueeze(0), negative_input_1_1.unsqueeze(0), negative_input_1_2.unsqueeze(0), negative_input_2_1.unsqueeze(0), negative_input_2_2.unsqueeze(0)), 0). Is there a bug in the experiment?
It may be an issue with the dataset processing. Check the dimension information of input_v[index], positive_input, and negative_input_1_1, and then debug accordingly.
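A quick way to do that debugging with toy tensors. The shapes here are assumptions chosen to reproduce the error: one operand is a proper (3, 224, 224) pixel feature while another is a flat vector, so after unsqueeze(0) the operands have different numbers of dimensions and torch.cat fails exactly as in the traceback above.

```python
import torch

# Toy stand-ins for the operands of the failing torch.cat call.
input_v = torch.randn(8, 3, 224, 224)    # batch of pixel features
positive_input = torch.randn(3, 224, 224)
broken_positive = torch.randn(2048)       # e.g. a pre-pooled vector by mistake

index = 0
for name, t in [("input_v[index]", input_v[index]),
                ("positive_input", positive_input),
                ("broken_positive", broken_positive)]:
    print(name, tuple(t.shape))  # inspect each operand's shape first

# cat succeeds when every operand has the same number of dimensions:
ok = torch.cat((input_v[index].unsqueeze(0), positive_input.unsqueeze(0)), 0)
print(ok.shape)  # torch.Size([2, 3, 224, 224])

# A mismatched operand (2-D vs 4-D after unsqueeze) raises the
# "Tensors must have same number of dimensions" RuntimeError:
try:
    torch.cat((input_v[index].unsqueeze(0), broken_positive.unsqueeze(0)), 0)
except RuntimeError as e:
    print(e)
```

If one of the printed shapes is a flat vector instead of (3, 224, 224), the fix is upstream in the dataset processing, not in the cat call itself.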
|
I may have been unclear before: I provided the entire embedding-generation script. As I recall, generate_embedding.py does not generate an embedding but a feature of shape (3, 224, 224). Since it has been a long time, this is a temporary script we saved for you to refer to when generating id2imagepixel.pkl, not a complete input/output pipeline.
Hello, I have the same problem as you. Did you solve it? If you don't mind, can you tell us how it was solved? Thank you very much!
Hi everyone, thanks for your attention. Our cache data for CUB, AWA, and SUN is now available here (Baidu cloud, 19.89 GB, code: s07d). @litianjun05090 @Bingyang0410 @wangwangwangj @Even008 @passer
Hi, I would like to ask one other question. The splitting standard we usually use is xlsa17, but how is the file att_splits.mat generated? I didn't find the method in the original paper. Can you explain this? Thank you very much!
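For context on what att_splits.mat contains, it can be inspected with scipy. The sketch below builds a dummy file with the kinds of fields commonly found in the xlsa17 release (an attribute matrix plus 1-indexed split index vectors); the exact field set and shapes in the real file are assumptions here.

```python
import numpy as np
from scipy.io import loadmat, savemat

# A dummy att_splits.mat with xlsa17-style fields: an attributes-by-classes
# matrix and index vectors selecting the trainval / seen-test / unseen-test
# images. Shapes are illustrative (AWA-like: 85 attributes, 50 classes).
savemat("att_splits.mat", {
    "att": np.random.rand(85, 50),
    "trainval_loc": np.arange(1, 101).reshape(-1, 1),
    "test_seen_loc": np.arange(101, 121).reshape(-1, 1),
    "test_unseen_loc": np.arange(121, 151).reshape(-1, 1),
})

splits = loadmat("att_splits.mat")
# loadmat adds __header__/__version__/__globals__; filter them out.
print(sorted(k for k in splits if not k.startswith("__")))
# ['att', 'test_seen_loc', 'test_unseen_loc', 'trainval_loc']
print(splits["att"].shape)  # (85, 50)
```

Loading the real att_splits.mat the same way and printing its keys and shapes is a quick way to see exactly what the split file encodes, even though how the xlsa17 authors chose the splits is documented in their paper rather than in the file itself.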
Already downloaded all the files mentioned, but I don't know where to obtain the APN.mat file.