DeepSeek R1 is a big moment for local inferencing. I've played with it locally in Ollama and LM Studio ... but for those who don't want to mess with local hosting, Groq runs a great inferencing service. Because of their OpenAI API compatibility layer, you can plug it right into Kibana's Playground and AI Assistants for testing.
That's good news for those of us who aren't as comfortable with local hosting (esp. in a secure way). Seems like Groq's inferencing service and OpenAI API compatibility layer make it somewhat accessible to experiment with DeepSeek R1 (in Kibana). Thanks for the heads-up.
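As a minimal sketch of why the compatibility layer makes this so easy: Groq exposes an OpenAI-style base URL, and the request body is the standard chat-completions shape, so a client (or Kibana's OpenAI connector) only needs the base URL swapped. The model id below is an assumption and may differ from the DeepSeek R1 variant Groq currently serves.

```python
import json

# Groq's OpenAI-compatible base URL; swap this in wherever an OpenAI
# base URL is expected (e.g. Kibana's OpenAI connector settings).
GROQ_BASE = "https://api.groq.com/openai/v1"

def chat_payload(prompt: str, model: str = "deepseek-r1-distill-llama-70b") -> dict:
    # Standard OpenAI /chat/completions request shape -- unchanged from
    # what you would send to OpenAI itself. The model id is an assumption.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

body = json.dumps(chat_payload("Why is the sky blue?"))
# POST f"{GROQ_BASE}/chat/completions" with an "Authorization: Bearer <key>" header
print(body)
```

The point is that nothing in the payload is Groq-specific, which is exactly what lets existing OpenAI integrations work against it.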
We just released our new article with @arcdotfun detailing how you can make an arXiv assistant using Rig, an upcoming Rust LLM framework!
We'll go over creating an AI agent, building an XML parser, hosting it on a web service, and deployment.
Article below 👇
Link to article: https://lnkd.in/egP2bBjr
OpenAI just dropped the new o1-preview and o1-mini 🔥
Both models are now integrated into FinetuneDB.
(You have to be an OpenAI tier 5 API user.)
Will share my thoughts on the new models soon!
If you want to try out an LLM on your own PC, my favorite tool is Text Generation WebUI, which allows you to run open-source models comparable to GPT-4 on a local server.
It's easy to install and run: just run the CLI script, download the GGUF file for your desired model from Hugging Face, and load it via the UI.
For open-source models, I recommend Llama-3 8B, Mistral-7B, and Phi-3. What I particularly like is the API extension, which allows you to host the LLM as a server with the same endpoints as OpenAI's API.
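To illustrate that last point, here's a sketch of talking to the API extension with nothing but the standard library. The port below is a common default for the extension, but it's an assumption; check what the server prints on startup. The request is built but not sent, since it needs the local server running.

```python
import json
import urllib.request

# Assumed default endpoint of Text Generation WebUI's OpenAI-compatible
# API extension; adjust host/port to match your local server.
API_URL = "http://127.0.0.1:5000/v1/chat/completions"

def make_request(prompt: str) -> urllib.request.Request:
    # Same request shape as OpenAI's chat completions endpoint, so any
    # OpenAI client also works by pointing its base URL at the local server.
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 200,
    }).encode()
    return urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )

req = make_request("Summarize the plot of Hamlet in one line.")
# resp = urllib.request.urlopen(req)  # uncomment with the local server running
# print(json.load(resp)["choices"][0]["message"]["content"])
```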
The new o1 model from OpenAI is useful for:
- analysis (esp. teasing out contrarian insights),
- bespoke pricing and packaging,
- complex negotiation, and...
Valuation??
Here's your prompt du jour — copy and paste your entire codebase into o1, either via Chat or the OpenAI developer playground, and ask it to come up with a fair-market price...
Then, ask, "what would have to be true for it to be 10x that?"
And finally, tell it to give you one single action you can take right now in 60 seconds or less to move things in that direction.
[8 weeks ago, I wrote my first line of code for the new product I'm working on. >4KLOC and 5 customers later, o1 seems to think things are at least somewhat on track :) ]
Inbox v1 is ready in Midday 🎉
Drag-and-drop files or use your unique email to reconcile everything, saving time and improving accuracy.
* Email provided by Postmark 📥
* Uploaded to Supabase storage 🗄️
* Background job via Trigger.dev 🔄
* Vercel AI SDK 🤖
* OCR extraction 👀
An in-depth tech blog post is coming soon. Share any questions you may have!
OpenAI just published a postmortem with root cause analysis for the downtime on Wednesday, if you like that sort of thing.
It can often be hard to predict the impact of a large change like adding telemetry to every node in a K8s cluster, especially at the scale OpenAI operates at.
Have you seen any other interesting write-ups recently?
Here is an AI Chatbot interface built with Gradio (https://www.gradio.app/) in Jupyter Notebook. I am running the backend server deployed using Render (https://lnkd.in/eHveY-aj), communicating with OpenAI API. One downside I found is that you cannot run other cells simultaneously while running a Gradio App in the same notebook environment. 😞
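For anyone curious what a setup like the one above looks like, here's a minimal sketch of a Gradio chat interface in the same spirit. The echo backend is a placeholder standing in for the Render-hosted OpenAI call; the names and wiring are illustrative, not the author's actual code.

```python
def respond(message, history):
    # Placeholder backend: echoes the input. In the setup described above,
    # this function would instead forward the message to the Render-hosted
    # server, which talks to the OpenAI API.
    return f"Echo: {message}"

try:
    import gradio as gr  # UI layer; guarded so the core logic runs without it

    demo = gr.ChatInterface(fn=respond, title="AI Chatbot")
    # demo.launch()  # blocks the notebook cell -- the downside noted above
except ImportError:
    pass
```

Keeping the backend call in its own function also makes it easy to test the chat logic without launching the UI.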
Apollo Research works with OpenAI on their safety research, and the methodology they've chosen is insightful and impressive.
When you ask o1-preview to discuss the root of the alignment problem, it becomes clear that humanity's moral frameworks must improve and be clarified.
1. Inherent Complexity of Human Values:
• Ambiguity and Diversity: Human values are not uniform or universally agreed upon. They are complex, context-dependent, and often conflicting between different cultures, societies, and individuals.
• Dynamic Nature: Human values evolve over time due to changes in societal norms, laws, and individual experiences.
2. Translation of Abstract Concepts into Machine Language:
• Lack of Common Sense Understanding: AI lacks consciousness and subjective experience, which are crucial for understanding nuances, emotions, and implicit meanings.
• Symbol Grounding Problem: AI systems process symbols and data without inherent understanding, making it challenging to connect abstract human concepts to machine representations.
3. Specification and Reward Misspecification:
• Objective Misalignment: When goals are not precisely defined, AI may pursue the intended objectives in unintended ways, leading to what’s known as specification gaming.
4. Unpredictability and Emergent Behaviors:
• Adaptation and Learning: AI that learns and adapts from data might develop strategies that diverge from human intentions, especially if trained on biased or unrepresentative datasets.
5. Ethical and Moral Framework Implementation:
• Subjectivity of Ethics: There is no universally accepted ethical framework, and encoding morality into AI is fraught with philosophical dilemmas.
• Conflicting Values: AI may struggle to navigate situations where human values conflict, such as privacy versus security.
6. Scalability of Oversight and Control:
• Human Oversight Limitations: As AI systems operate at speeds and scales beyond human capability, constant monitoring becomes impractical.
• Autonomy Levels: Highly autonomous systems may make decisions without human intervention, increasing the risk of misalignment.
7. Economic and Social Pressures:
• Competitive Advantage: Organizations may prioritize performance over alignment to gain economic benefits, leading to underinvestment in alignment research.
• Regulatory Challenges: Lack of comprehensive regulations and standards for AI development contributes to inconsistent alignment practices.
We worked with OpenAI to evaluate o1-preview before public deployment.
We found that it is meaningfully better at scheming reasoning than previous models.
You can read the system card here
https://lnkd.in/dM-HFNxu
🚀 Introducing Howitzer - Your AI-powered CLI companion!
Never leave your terminal to search for commands again. Just ask naturally:
"how do i convert image.png to jpeg"
"how do i get top 5 lines of data.csv"
✨ Key features:
• Smart command generation
• Built-in safety checks
• Auto error recovery
• Works with your OpenAI key
Try it: npm i -g howitzer
thewriting (dot) dev/howitzer
#OpenSource #CLI #DeveloperTools
Curious what your initial take is on the model, David Erickson?