9 Comments
May 22 · Liked by Alejandro Piad Morffis

I really enjoy this series - comprehensive and clearcut!

author

We gotta do some collab man

May 22 · Liked by Alejandro Piad Morffis

Indeed. We just have to be better at figuring out the WHAT and then coordinating it!

author

I'm now more focused on this series, which is ultimately for developers, but in the near future I want to do a series on LLMs for power users, with no coding involved. Tips and tools to do everything from auto-responding to your email to creating slide decks to you name it, just with clever prompts and by gluing together free apps and services.

May 22 · Liked by Alejandro Piad Morffis

Nice, that sounds like a project that's not out of my league. Could be fun to put our minds together on it. I've also been toying with the idea of a very low-key "zero-to-not-hero-but-kind-of-okay" course that gets complete beginners into the basics of prompting: how LLMs can help, what to watch out for, etc. Then again, I'm sure that's already been done to death by many others, so it'd have to have a unique and engaging angle to be worth it.

May 22 · Liked by Alejandro Piad Morffis

1. Can a non-specialist train an LLM to respond to simplified, reduced keyword prompts like "summarize" or "who developed..."?

2. Can an LLM be trained to provide sources with its output (I think not, but maybe...)?

author
May 22 · edited May 22

I think the answer to both is "yes, partially at least".

1. There are no-code platforms like monsterapi.ai that will let you fine-tune an open-source model like Llama 2 with a point-and-click interface. Now, I'm not sure whether deploying that model to use it in an application still requires some coding, but I'm certain that if it's not possible today without code, it will be in a few months. No-code is at an all-time high, even more so with LLMs.

2. If you look at my latest coding lesson article on building an answer machine, we actually make the model provide sources (which are crawled from Google, but that can be changed to whatever repository you need) and output citation numbers and all. And that's just prompting, no fine-tuning. There is still the caveat that the model can hallucinate a source or, worse, attribute the wrong claim to a source. With fine-tuning you can probably reduce hallucinations to a reasonable degree, but I don't think they can be eliminated completely.

May 22 · edited May 22 · Liked by Alejandro Piad Morffis

Excellent list!

You may have covered these as part of the broader categories; some more areas I use it for: generating ideas, proofreading a document or text, technical troubleshooting, and learning how to do something (for example, how to do something in Excel).

What generative AI is being used for:

Technical Assistance & Troubleshooting (23%)

Content Creation & Editing (22%)

Personal & Professional Support (17%)

Learning & Education (15%)

Creativity & Recreation (13%)

Research, Analysis & Decision Making (10%)

And a very long list at the end of the article:

https://hbr.org/2024/03/how-people-are-really-using-genai?utm_campaign=The%20Batch&utm_medium=email&_hsenc=p2ANqtz-9xZzLS9UdHky2x7OQHmhhrsVIV-pjkmMHdvNrsQcZHPOQykj7os2ocYdjh3vjmsdEIw090bUUNg_YFXaiS18ZfTtyG6qKijyyg_yrUs5QsdnHX7uI&_hsmi=302093078&utm_content=302093078&utm_source=hs_email

author

Thanks for the stats, really interesting. Yes, I was more focused on high-level, abstract tasks, so many of the applications you mention are in some way combinations of these tasks.
