The website shows Kayak, P&G, NVIDIA, and the City of San Jose, CA.
Sounds like you’re about as unbiased as Gartner… Here’s a file cabinet specially made for you and Gartner to file your favorite analyst reports.
But don’t you send everything to the cloud? Or can you set up direct tunnels between edges?
Good luck service chaining that with HPE integrations being the usual hellhole…
This is why customers are dumping zScaler as well: you can ‘connect’ it, but they are separate products and don’t deliver the core idea of SASE, where a single cloud-native platform provides most if not all network functionality in a single pane of glass.
350+ PoPs? Doing what exactly? Can you point the community to their PoP status page that lists where all their PoPs are?
Did you even read my reply?
If you’ve got anything valuable to add, go ahead.
But I’m pretty sure you don’t have anything close to a valuable insight about this technology.
Yes, their PoPs are transit for everything. There are options to do traffic directly between locations if necessary, such as replication traffic, but in most scenarios that becomes obsolete. Also, latency from transferring everything through PoPs is negligible in most areas and offers performance gains anyway through TCP optimization etc. The concept of routing everything through PoPs offers many advantages that only become clear along the way. Such as zero hardware bottlenecks when enabling TLS decryption even for internal traffic, standardization of breakout paths for applications with just a couple of clicks etc…
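To put a rough number on the TCP optimization point: a window-limited TCP flow can’t move faster than window/RTT, and terminating the connection at an intermediate PoP (“split TCP”) replaces one long RTT with two shorter legs that are pumped independently. A minimal sketch, with all RTT and window values being illustrative assumptions (not vendor figures), assuming a PoP roughly midway with no path stretch:

```python
# Sketch: why proxying TCP at a PoP ("split TCP") can raise throughput.
# A window-limited flow moves at most window/RTT bytes per second; the
# end-to-end rate of a split connection is bounded by its slowest leg.
# All numbers are illustrative assumptions.

def window_limited_throughput_mbps(window_bytes: float, rtt_ms: float) -> float:
    """Upper bound on a single TCP flow: window / RTT, in Mbps."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

WINDOW = 64 * 1024  # classic 64 KiB receive window

direct = window_limited_throughput_mbps(WINDOW, rtt_ms=200)   # one 200 ms path
per_leg = window_limited_throughput_mbps(WINDOW, rtt_ms=100)  # two 100 ms legs

# Halving the worst per-leg RTT roughly doubles the window-limited ceiling.
print(f"direct:    {direct:.1f} Mbps")
print(f"split-TCP: {per_leg:.1f} Mbps")
```

The same mechanism explains why the detour through a PoP can be a net win for throughput even when it adds a few milliseconds of latency.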
The homepage shows AWS, Azure, GCP and Oracle cloud, the numbers add up. https://www.axissecurity.com/ What would you like to see the PoPs do exactly?
Yea, you said when Gartner ranked Cato #1 they were right; now they rank PAN #1, they’re wrong. So basically you’re just a Cato employee who supports Gartner when they support you. Wow, how insightful. Can you please add more “valuable insights” to this thread? We are so lucky to have you educating us on analysts!
Yep, interesting, but it still isn’t obvious that routing everything through the PoP won’t cause performance issues…
Even if you combine all those public clouds together, there aren’t 350+ PoPs. Furthermore, if they are characterizing public cloud locations as a “PoP” simply by running a virtual server instance of their tech… by that same logic, I guess every other supplier out there that supports virtualization of their tech in public cloud has the same number of PoPs?
At any rate, you can do the math…nowhere near 350+ locations out there with the 4 big public infra cloud providers.
Like many suppliers, their marketing likely way overstates what they have and overdramatizes anything unique over what most other suppliers can build in the same way using “yesterday’s” technology.
Edit: I found the excerpt on their homepage:
Atmos has 350 edge locations (across AWS, Azure, Google and Oracle), PoPs across 5 continents, and uses smart-routing as part of its full network-as-a-service.
Note the reference to EDGE LOCATIONS and the fact that they are making a distinction between edge locations and PoPs. This is ALL marketing, just like Palo does for Prisma. Edge locations don’t actually supply any services. It’s a marketing move to amplify a footprint that doesn’t actually translate to value. Basically, if you look at Google’s network you will see them talk about their edge locations en masse. This is what Axis and PANW use to inflate the numbers. All the actual services, e.g. ZTNA, CASB, etc., are supplied in the PoPs themselves, where IaaS is available.
For GCP, that’s like 36 locations? AWS 32? Etc.
From what I can tell, k0d31ne only mentioned that Cato was listed as a sample vendor in their Hype Cycle. So were other vendors, like Palo. Not sure I see where anyone said (other than you) Cato was #1 ranked by anyone. For what it’s worth, Gartner is not a “pay to play” analyst firm. There are likely many that are, though.
In terms of adding value, I kind of feel like k0d31ne was at least adding context in how a certain vendor/supplier operated their architecture and platform and provided his/her opinion based on the authors request.
From what I can tell, CustardBeneficial766, all you’ve done is be critical of k0d31ne’s opinion without actually providing an opinion that helps the author in any way. Perhaps I misunderstood your intention to help the author here?
We only avoid datacenter firewalling scenarios where local IDP/IDS is the more obvious fit for locally segmented traffic. Apart from that, 5-10 ms of additional latency to a PoP will never be noticeable by any user on, e.g., HTTP traffic.
Quite the contrary, with the exception of what @k0d31n3 mentions where you have heavily localized workloads between different segments in a datacenter.
The PoPs oftentimes yield a throughput multiplier over public-internet-based IPVPN WANs and private MPLS networks. In fact, the optimizations work for ALL directions of traffic (including northbound to the internet).
Lol what vendor do you work for… Virtualization? Modern edge networking with cloud native containers can use edge on ramps with tech like AWS Global Accelerator, that reduces the number of locations the software has to run vs legacy virtual appliances in “cloud” or hardware appliances in Equinix, etc.
The old way is building a big network and boasting about having a bunch of PoPs as you sink tens or hundreds of millions into a rigid infrastructure that can’t keep up. The new way is delivering a cloud-native edge with TCP optimization and session-centric routing, leveraging native cloud tier-1 edge networks to avoid ISP hops.
The old guard are glorified routers / firewalls / VPNs in a bunch of locations they call cloud, might as well be rackspace. The new guard is building SSE/SASE like a modern website.
No vendor in SSE can outperform all the cloud hyperscalers combined… so don’t bring a virtual appliance in a PoP to a Cloud Native Edge fight.
(avoids taking the bait)
Precisely my point. I think maybe you misunderstood my comment? I think we are actually debating the same thing and mostly from the same side.
By the way, the way you deliver a cloud-native edge with TCP optimization, session-centric routing, fully converged network security services and full contextual awareness… is by building a big ole network in top-tier DCs (like Equinix) with tier-1 providers and actually controlling your footprint and platform top to bottom. These two points seem to be in conflict in your comment, but you don’t actually get one without the other.
Placing resources in someone else’s cloud means you give up a lot of the control you claim is fundamentally important. Running your platform in AWS, GCP, Azure, etc. means you don’t get to select your Tier 1 peers; you use whatever the public cloud gives you. You don’t get to select which IXs you join to optimize ingress/egress into your cloud for your consumers. Your service IPs belong to the public cloud provider. Your footprint/markets are limited to where theirs are. You don’t really get to control your routing between PoPs: you can either use the public cloud backbone or the public internet. You don’t actually have your own backbone. By the way, at scale, those cloud consumption costs can get crazy. How does that translate long term for customers? I can tell you what it did for Splunk!
Furthermore, AWS Global Accelerator and Google edge service locations, in the context of Zero Trust, network security or Cloud App Security (aka SSE), provide no real value. You mention hop count reduction, and although hop quantity can represent some risk, the real issue is distance. Hop count reduction doesn’t necessarily translate to distance reduction. Let me put it into real context (an extreme case, but it’s easier to make the point)…
PANW uses GCP and markets the value of the vastness of their PoPs through a partnership with Google edge (something like 150 locations around the world - sounds like Axis does the same). In Peru, a user would onboard to said Google edge location locally in Lima. Wonderful, right? Now that the user is “on the network,” let’s do the most basic inspection of their traffic before allowing them to access whatever site they want to visit, a.k.a. SWG. The user first has to go all the way to Brazil, the closest “service/PoP” location, for actual inspection of the traffic before it can carry on to its final destination. Doesn’t sound like hop count reduction to me. Doesn’t sound like that Google edge location is doing anything of real value.
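The “distance, not hop count” point is easy to quantify: the hairpin to the nearest inspection PoP pays a physics tax before any service hop is counted. A rough sketch, using Lima and São Paulo as a stand-in for “the closest service location in Brazil” (the specific Brazilian PoP city is my assumption for illustration; light in fiber travels at roughly 200,000 km/s):

```python
# Why distance sets the latency floor: compute the great-circle detour
# a Lima user pays to reach an inspection point in Brazil, and the
# minimum extra round-trip time that detour implies. City coordinates
# and the Brazilian PoP location are illustrative assumptions.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

LIMA = (-12.05, -77.04)
SAO_PAULO = (-23.55, -46.63)
FIBER_KM_PER_S = 200_000  # light in glass is ~2/3 of c in vacuum

detour_km = haversine_km(*LIMA, *SAO_PAULO)
extra_rtt_ms = 2 * detour_km / FIBER_KM_PER_S * 1000  # out and back

print(f"Lima -> São Paulo detour: {detour_km:.0f} km")
print(f"minimum added RTT:        {extra_rtt_ms:.0f} ms")
```

That is ~35 ms of added round-trip time as a hard lower bound, before queuing, routing inefficiency, or the inspection itself, and no amount of hop-count reduction at the onboarding edge can claw it back.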
So, in summary, the marketing suggests a wonderful “hypothetical”, but reality is far different. Always best to test/PoC. In my experiences PANW performs worse than the public internet, let alone achieve some improvement by way of TCP acceleration.
Cato Networks, by the way.
WARNING - commercial coming
In the above example, with Cato, the same user would connect to the Lima Cato PoP, where ALL services reside. In fact, all Cato PoPs perform all inspection services. And if that user suddenly needed to access file shares hosted in their Ashburn, VA datacenter, they would actually experience the value of a backbone between Cato PoPs located in Lima and Ashburn on Cato’s own cloud, with Cato-selected tier-1 providers in Cato-selected top-tier datacenters. Since there are multiple Tier 1 providers that service a Cato PoP, Cato could actually make a packet-by-packet route decision over its core to ensure the most optimal path is taken and create a predictable experience. Of course, all TCP traffic would be accelerated as well.
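The packet-by-packet route decision described above can be sketched very simply: probe each provider path between two PoPs continuously and send each packet down whichever path currently looks best. This is a toy illustration of the general technique, not Cato’s actual implementation; provider names and latencies are invented:

```python
# Toy sketch of per-packet path selection over a private backbone:
# keep live latency probes per tier-1 provider on a PoP-to-PoP segment
# and route each packet over the currently best path. All names and
# numbers are made up for illustration.

def best_path(probes: dict) -> str:
    """Pick the provider with the lowest current measured latency (ms)."""
    return min(probes, key=probes.get)

# Latest probe results for a Lima -> Ashburn segment (invented values):
probes = {"provider_a": 92.0, "provider_b": 88.5, "provider_c": 101.3}
print(best_path(probes))  # provider_b

# A brownout on provider_b shifts the very next packet, not a whole flow:
probes["provider_b"] = 140.0
print(best_path(probes))  # provider_a
```

The point of deciding per packet rather than per flow is that a mid-flow brownout on one provider is routed around immediately, which is what makes the experience predictable.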
I won’t shout out the quantity of PoPs Cato has so as not to be caught boasting.
I’d respond, but then you’d respond, and I think enough time has been wasted. Seeya in the field where it counts and we have to prove things rather than just say them.
Peace out. That was my suggestion as well, by the way. Test it out.