Hi, great feature!
It doesn't always actually pick "similar images", but that's expected, as it has no notion of the images' content. I think some deep learning could do a lot here; I have many years of experience with it and would be happy to help.
To give an idea: it would be possible to train a model on a relevant task (e.g. correctly assigning tags) and then use it to extract features from the images. Once you have the image features, it is trivial to pick the nearest neighbors in feature space. If the model was trained properly, the nearest images should be semantically and visually similar (depending on what task was used to train the model).
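To make that concrete, here is a rough sketch of what I mean, using a pretrained torchvision model as the feature extractor; the model choice, the image paths, and the cosine-similarity lookup are just placeholders, not a proposal for the actual implementation:

```python
# Rough sketch: extract features with a pretrained CNN, then pick the nearest
# neighbours in feature space. Model choice and image paths are placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with the classification head removed -> feature extractor
backbone = models.resnet18(pretrained=True)   # torchvision's pretrained ResNet-18
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(paths):
    """Return one feature row per image, L2-normalised for cosine similarity."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    with torch.no_grad():
        feats = feature_extractor(batch).flatten(1)   # (N, 512) for ResNet-18
    return torch.nn.functional.normalize(feats, dim=1)

# "Similar images" = nearest neighbours of the query in feature space
gallery_paths = ["img1.jpg", "img2.jpg", "img3.jpg"]   # placeholder paths
gallery_feats = extract_features(gallery_paths)
query_feats = extract_features(["query.jpg"])

scores = query_feats @ gallery_feats.T                 # cosine similarities
top = scores.squeeze(0).topk(k=min(3, len(gallery_paths)))
for score, idx in zip(top.values, top.indices):
    print(gallery_paths[int(idx)], float(score))
```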
At first I was also expecting visually similar images. Maybe "Users also liked/favorited" would be more self-explanatory, but I don't mind the current text.
I was also thinking about playing with something like this to suggest tags when I upload something.
Do you have any tips for frameworks / libraries / general approaches to get started with something like this? I have some knowledge of ML, but not much hands-on experience.
Any application of ML to GWM seems to me like it would involve heavy image processing (and it probably wouldn't apply to videos).
Agree that videos would be trickier, but for single images, once the model is trained, running a deep model is not very expensive. I would estimate that a regular GeForce 1060/1070 could process at least 60 images per second. Training the model is expensive, but it can be done offline once in a while.
As for libraries, I would go with PyTorch, which is very intuitive. This link is a good starting point: transfer_learning_tutorial.html (search for it on the PyTorch website, I can't post links).
The given example is for single-label classification, but it is easy to adapt it to multiple labels (the tags). If we find a platform to collaborate on, I'd be willing to help with code.
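For instance, the multi-label change could look roughly like this. This is not the tutorial's code, just a sketch; num_tags, the tag encoding, and the training-loop details are assumptions:

```python
# Sketch of the multi-label twist on the transfer-learning tutorial:
# instead of one class per image, each image gets a 0/1 vector over the tags.
# num_tags, the data loading and the training schedule are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

num_tags = 50                                          # hypothetical tag vocabulary size

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_tags)   # one logit per tag

# Multi-label: an independent sigmoid per tag instead of a single softmax
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

def train_step(images, tag_vectors):
    """images: (B, 3, 224, 224); tag_vectors: (B, num_tags) of 0.0/1.0."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), tag_vectors)
    loss.backward()
    optimizer.step()
    return loss.item()

def suggest_tags(image_batch, threshold=0.5):
    """Predict tags for new uploads: every tag whose probability clears the threshold."""
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(image_batch))
    return probs > threshold                           # boolean (B, num_tags) mask
```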
I updated this a bit.
Thanks for all the work, chainer.
Mind if I ask: how does the highest rating (this day, this week, this year, all time) work? All time is pretty self-explanatory, but I'm wondering whether "this year" means the highest rating over the past 365 days, or whether it starts from the current year's first day (January 1, 2021)?
I don't know your algorithm, but if I choose a distinctive image (e.g., chest flies by a lean physique athlete), it provides similar images. To save CPU time, you may want to cache the algorithm's results: I got so many hits that I wound up going through a long list of images, and that list was probably recalculated several times. Good feature.
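As a rough illustration of the caching idea (compute_similar() is a hypothetical stand-in for whatever the current similarity lookup does, and the TTL is arbitrary):

```python
# Minimal sketch of the caching idea: compute the "similar images" list once per
# image and reuse it, instead of recalculating it on every page view.
import time

CACHE_TTL = 24 * 3600           # recompute at most once per day (arbitrary choice)
_cache = {}                     # image_id -> (timestamp, result)

def similar_images_cached(image_id):
    now = time.time()
    hit = _cache.get(image_id)
    if hit is not None and now - hit[0] < CACHE_TTL:
        return hit[1]                      # fresh cached result, no recomputation
    result = compute_similar(image_id)     # hypothetical: the existing similarity lookup
    _cache[image_id] = (now, result)
    return result
```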
No, I just messed around with various aspects of it until it was working well.