Zabbix has been part of my toolbox for quite some time. I can easily say it's an indispensable tool for me now.
Managing a dozen servers without Zabbix would be unimaginable. I'm monitoring all of it: CPU, memory, hard drives, website response times, downtime. The UI might be a bit "old school", but everything works flawlessly.
As for hard-drive monitoring, I love the machine learning option that lets you "predict" the number of days before running out of space. That's quite helpful, as several of my servers went down after running out of space multiple times in the past (before I was using Zabbix).
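In Zabbix this kind of prediction is typically set up as a trigger using the `timeleft()` history function. A minimal sketch, assuming a host named `MyServer` and the standard `vfs.fs.size` filesystem item (both placeholders; syntax follows current Zabbix trigger-expression conventions):

```
# Hypothetical trigger: fire when, extrapolating from the last 1h of data,
# the root filesystem's free space (pfree) is predicted to reach 0
# in less than 7 days.
timeleft(/MyServer/vfs.fs.size[/,pfree],1h,0)<7d
```

The first parameter after the item is the evaluation window, and the `0` is the threshold the item is predicted to reach; the comparison turns the predicted seconds-remaining into an alert condition.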
Based on our records, CommonCrawl seems to be a lot more popular than Zabbix. While we know about 91 links to CommonCrawl, we've tracked only 5 mentions of Zabbix. We track product recommendations and mentions on various public social media platforms and blogs. They can help you identify which product is more popular and what people think of it.
Common Crawl Foundation | REMOTE | Full and part-time | https://commoncrawl.org/ | web datasets I'm the CTO at the Common Crawl Foundation, which has a 17 year old, 8. - Source: Hacker News / 2 months ago
https://commoncrawl.org/ is a non-profit which offers a pre-crawled dataset. The specifics of individual tools probably vary. I imagine most tools would be based on academic datasets. - Source: Hacker News / 6 months ago
Should the NYT not sue https://commoncrawl.org/ ? OpenAI just used the data from commoncrawl for training. - Source: Hacker News / 6 months ago
What you’re likely referring to is Common Crawl: https://commoncrawl.org. - Source: Hacker News / 6 months ago
> ... a project called "Nutch" would allow web users to crawl the web themselves. Perhaps that promise is similar to the promises being made about "AI" today. The project did not turn out to be used in the way it was predicted (marketed), or even used by web users at all. Actually Nutch is used to produce the Common Crawl[0] and 60% of GPT-3's training data was Common Crawl[1], so in a way it is being used... - Source: Hacker News / 7 months ago
Official Zabbix training and documentation on zabbix.com? Source: almost 2 years ago
Hello, do you know of a how-to guide for installing Zabbix on Ubuntu 20.04? I tried the manuals from zabbix.com for MySQL/Apache, but it didn't work. Source: about 2 years ago
He suggested that I indeed should set up a home lab. To be specific, he said that I should create a minimal install of CentOS 8, install Zabbix server on it (https://zabbix.com), and monitor a whole bunch of other VMs, services, and such. He said that I should set up a variety of VMs and also maybe host a website on one of them. And then if I was able to do that, I could help to share a load of Zabbix related... Source: over 2 years ago
This is a fresh 21.10 install, using the install repo as detailed on the zabbix.com download page. Source: over 2 years ago
Well, if you can't find anyone, I am more than happy to fill the slot with something regarding Zabbix - just let me know ;). Source: over 2 years ago
Scrapy - Scrapy | A Fast and Powerful Scraping and Web Crawling Framework
Datadog - See metrics from all of your apps, tools & services in one place with Datadog's cloud monitoring as a service solution. Try it for free.
Apache Nutch - Apache Nutch is a highly extensible and scalable open source web crawler software project.
Nagios - Complete monitoring and alerting for servers, switches, applications, and services
StormCrawler - StormCrawler is an open source SDK for building distributed web crawlers with Apache Storm.
Dynatrace - Cloud-based quality testing, performance monitoring and analytics for mobile apps and websites. Get started with Keynote today!