Based on our records, Catchafire should be more popular than MiniGPT-4. It has been mentioned 21 times since March 2021. We are tracking product recommendations and mentions on various public social media platforms and blogs. These can help you identify which product is more popular and what people think of it.
You two have to figure out #1. For #2, is it really starting to earn or just keeping busy? For me, I am using my skill set to volunteer for nonprofits. I found catchafire.org, which matches volunteers with projects that non-profits submit. They are happy to have someone to help, and you get to work at a comparatively leisurely pace, win-win. It's what's worked for me. There are other platforms like Catchafire. Source: about 1 year ago
Catchafire.org is a website where non-profits post volunteer opportunities for people with specialized skills. You could get some real-world experience in a sector that may be relevant to your interests—education, the arts, etc.—and potentially a couple of good references for future employers. Source: about 1 year ago
I recommend doing a volunteer gig at taprootplus.org or catchafire.org. Great learning experience, remote work, and they are very tolerant of mistakes and learning curves. If you do well, have them give you a recommendation on LinkedIn. Source: about 1 year ago
Look for project coordinator or project officer roles; nonprofits/NGOs seem to be opening such roles quite often. Also, check out catchafire.org (volunteering for nonprofits/NGOs), good luck. Source: about 1 year ago
I am still trying to break into the industry, and I have some confidence issues regarding my ability to do the job. I have always been a more hands-on person, so until I can get my hands dirty it's hard for me to feel comfortable. I even saw someone recommend catchafire.org, and I even feel incapable of doing these volunteer jobs. Source: over 1 year ago
Aren't there only two open multimodal LLMs, LLaVA and MiniGPT-4? Source: 12 months ago
So we use MiniGPT-4 for image parsing, and yep it does return a pretty detailed (albeit not always accurate) description of the photo. You can actually play around with it on Huggingface here. Source: about 1 year ago
We use MiniGPT-4 first to interpret the image and then pass the results onto GPT-4. Hopefully, once GPT-4 makes its multi-modal functionality available, we can do it all in one request. Source: about 1 year ago
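The two-step pipeline in the quote above can be sketched in a few lines. This is a hedged illustration, not the commenter's actual code: the `describe_image` function stands in for a MiniGPT-4 call (which has no official hosted API), and the composed prompt would then be sent to a text-only LLM such as GPT-4.

```python
def describe_image(image_path: str) -> str:
    """Placeholder for a MiniGPT-4 call that returns a textual
    description of the image. A real deployment would invoke the
    locally hosted model or its Hugging Face demo here."""
    return "A golden retriever catching a frisbee in a park."


def build_llm_prompt(description: str, question: str) -> str:
    """Step two: combine the image description with the user's
    question into a prompt for a text-only LLM (e.g. GPT-4)."""
    return (
        f"Image description: {description}\n"
        f"Question: {question}\n"
        "Answer based only on the description above."
    )


description = describe_image("photo.jpg")
prompt = build_llm_prompt(description, "What animal is in the photo?")
# `prompt` would now be sent to the text-only LLM in a single request.
print(prompt)
```

Once a model exposes native multimodal input, the two steps collapse into a single request, which is the consolidation the commenter is hoping for.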
But I would like to bring up that there are some multimodal models (LLaVA, MiniGPT-4) that are built on censored LLaMA-based models like Vicuna. I tried several multimodal models, including LLaVA, MiniGPT-4, and BLIP-2. LLaVA has very good captioning and question-answering abilities and is also much faster than the others (basically real-time), though it has some hallucination issues. Source: about 1 year ago
https://minigpt-4.github.io/ <-- free image recognition, although not powered by true GPT-4. Source: about 1 year ago
HandUp Gift Cards - Give directly to a homeless neighbor on the street
LangChain - Framework for building applications with LLMs through composability
HandUp Campaigns - Assemble your community to donate to those in need
Hugging Face - The Tamagotchi powered by Artificial Intelligence 🤗
TAP London - Tackling the homeless problem together
Haystack NLP Framework - Haystack is an open source NLP framework to build applications with Transformer models and LLMs.