Over these two days, I will be presenting
some noteworthy use cases and detailed examples. But before that,
I would like to start with some basics: the core features of DC/OS
that help explain why people choose it. I will walk through the core technologies that
enable services, networking, load balancing, and more,
so you can see what you can do with them. Here is the general overview of my talk. Beginning with what kind of service DC/OS provides, I will cover how services are deployed,
how load balancing works for deployed services,
how health checking is done in DC/OS, and what support the platform provides for
monitoring, logging, and debugging. And last but not least, "etc.,"
which I will come back to in detail later. First, services. As you know, DC/OS is a tool that
orchestrates container services. Some of us do not have a clear picture of it
and just loosely call it Docker. But Docker is only
one implementation of the container standards, and as you use Docker over time,
you will run into a number of limitations: things you want to do but can't. For example, you try to run a Redis cluster
under container orchestration, and something stops you. There are many constraints. With the UCR standard container that DC/OS provides,
you are free from many of those restrictions. There is a learning curve, but features like one-click Kubernetes installation
are all enabled by the UCR container. There are other benefits as well. For instance, the dependency on Docker Engine is removed, so you have more flexibility. It also gives you better performance, with
faster deployment and less downtime. Stability is greatly improved too,
but there are certain preconditions. This is a table from
the DC/OS documentation. It is a 'Yes'/'No' comparison of what
the Docker and UCR containers do and do not support. There are more 'No's on the Docker side
and more 'Yes'es on the UCR side, so in general UCR covers more.
What you can't do with Docker, you can do with UCR.
There are real use cases of this. One is a company that
deployed an ERP solution in DC/OS containers, which would have been difficult with Docker. The same goes for this table. It is a long list,
but the details are not important here. Moving on. There is one realistic issue, though: despite what UCR can do,
not many people use it. Most people use Docker instead
because it is easy and simple to use. Most of the applications we actually run in the field can be covered by Docker, from simple web servers to
cloud-native applications, and so on. These can all be covered by Docker,
so you could say Docker is the mainstream. However, there are things you cannot do with Docker,
like the Catalog Services on DC/OS. For example, a Cassandra cluster in a container orchestration environment, a Kafka cluster, a Spark cluster, or an ELK stack built around Elasticsearch: all of these are quite
challenging to do with Docker. Even after one is built,
there are further difficulties in running it. But DC/OS's UCR container provides these as stable services through the Catalog. This is one of the top reasons for using DC/OS. You do not have to install
an Elasticsearch/ELK cluster manually. Trying to port those setups to a Docker base is literally a fruitless effort; without such tedious work,
installation just works on DC/OS. Of course, actually using it takes some individual effort,
but the time required to get to that point is saved, which significantly reduces
the overall time to get the system running. A notable example among these packages is Kubernetes. As Mr. Freiberg said earlier,
using Kubernetes is not the issue. It is the installation
that gives you the struggle. When installing on bare metal or in the cloud, you will run into numerous problems, including operating system and kernel issues. Mesosphere DC/OS frees you from those
with a single click of a button. Of course, it is not issue-free,
but the remaining issues are minor. So that is the container side. When we say 'service' in DC/OS, it means deployment through
container placement and orchestration. If there are 30 machines in the cluster,
you need a plan for how and where to distribute workloads. Those machines are not
always in one rack. They could be in different buildings, or in the same building
but divided across multiple racks. For example, let's say
a service is deployed in a single rack. If the power goes out,
the service has to shut down. So in actual operation,
when there are multiple racks, we spread workloads across them, and
for applications that require local disk to stay stateful, we often had to
physically go to the machine to redeploy. Not everyone uses high-performance storage,
and many still use local disk. Even the ELK stack is commonly run on local disk. A feature called placement 'operators,' which helps
express such placement rules, is provided. So I took out everything else
and decided to focus only on the operators. They are placement functions for fairly simple rules. With the CLUSTER operator, for instance, if we have clusters A and B in a DC/OS deployment and I want to deploy
my application only in cluster A, DC/OS handles the distribution automatically;
without such a feature, you would need your own code and processes
to control placement properly, but these functions are already provided for you. So, for example, there is an operator
that lets you pin a deployment to specific cluster attributes. You can use the LIKE operator to deploy
only to hosts in a certain IP range, while UNLIKE does the opposite. And with MAX_PER,
you can cap placement, such as at most two instances per rack across the three existing racks. So these operators are very important. Engineers I know personally have claimed that
2019 will be the year of the operator. If you search for 'operator,'
you will see it is getting a lot of attention. And if you look at the service setup,
these are the possibilities: the constraints operators.
One entry says 'hostname CLUSTER.' It means the operator will place all instances on
a single host. Which host,
it decides automatically. Above it,
there is 'hostname CLUSTER a.specific.' This means it deploys only to
the host with that name. Likewise, you can plan
a placement strategy in various ways. The bottom-left box says 'rack id LIKE 1-3.'
That value is a regular expression. It essentially means it will
only deploy to racks whose ID is between 1 and 3. There are many ways you can
make use of such rules. With MAX_PER, you can have it place at most
two instances per rack on hosts that have a rack ID. With UNIQUE, you can place at most one instance
per distinct hostname. For example, if we build a Redis cluster, we would run redis-1 through redis-6, six instances in total, with masters and replicas arranged accordingly, and we can make Redis deploy back to the same location even on redeployment. GROUP_BY and IS are also available.
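As a rough illustration, here is how placement operators like these are written in a Marathon app definition. This is a sketch based on Marathon's documented constraint syntax; the attribute name `rack_id`, the app id, and the regex values are illustrative, not taken from the talk.

```python
# Sketch: Marathon placement constraints as they appear in an app
# definition. Each constraint is a [field, operator, value?] triple.
# The field name "rack_id" and the values below are illustrative.

app = {
    "id": "/my-service",
    "instances": 6,
    "constraints": [
        # UNIQUE: at most one instance per distinct hostname.
        ["hostname", "UNIQUE"],
        # LIKE: only place on agents whose rack_id matches the regex.
        ["rack_id", "LIKE", "[1-3]"],
        # MAX_PER: at most two instances per rack.
        ["rack_id", "MAX_PER", "2"],
    ],
}

# LIKE/UNLIKE are regular-expression matches, so we can check locally
# which agent attribute values a constraint would admit.
import re

def admits(constraint, value):
    field, op, *arg = constraint
    if op == "LIKE":
        return re.fullmatch(arg[0], value) is not None
    if op == "UNLIKE":
        return re.fullmatch(arg[0], value) is None
    raise NotImplementedError(op)

print(admits(["rack_id", "LIKE", "[1-3]"], "2"))  # True
print(admits(["rack_id", "LIKE", "[1-3]"], "4"))  # False
```

The point of the triple form is that the scheduler, not your own tooling, enforces the rule on every deployment and redeployment.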
So you can diversify your placement plans. You might wonder why this matters, but people who actually operate systems
know how valuable it is. So that is the service strategy and placement plan. Container orchestration systems also automate load balancing. You may have heard of this. One strategy is Layer 4 (L4) load balancing,
which is provided in the platform. On AWS,
you would put an Elastic Load Balancer (ELB) on top; with DC/OS, you do
load balancing on the public nodes. You can build various load-balancing strategies with
what is inside the DC/OS cluster. For example, you can do DNS-based load balancing, and reverse-proxy-based load balancing as well. Normally, DNS-based load balancing uses a round-robin format, which balances across application instances distributed over the nodes. You can do a lot of different things.
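To make the internal address forms concrete, here is a sketch of the two naming schemes commonly used on DC/OS: a Mesos-DNS name (resolved round-robin across task IPs) and a named layer-4 VIP with a fixed port. The suffix patterns follow DC/OS conventions as I understand them, but treat the exact domains as assumptions and verify them against your cluster's documentation.

```python
# Sketch: building the two internal service address forms.
# The DNS suffixes below follow DC/OS conventions (mesos-dns and the
# l4lb layer) but should be verified for your cluster version.

def mesos_dns_name(app_id, framework="marathon"):
    # "/group/my-service" -> "my-service-group.marathon.mesos"
    parts = app_id.strip("/").split("/")
    return ".".join(["-".join(reversed(parts)), framework, "mesos"])

def named_vip(vip_name, port, framework="marathon"):
    # A named virtual IP: one stable layer-4 address with a fixed port.
    return f"{vip_name}.{framework}.l4lb.thisdcos.directory:{port}"

print(mesos_dns_name("/my-service"))  # my-service.marathon.mesos
print(named_vip("my-service", 5000))  # my-service.marathon.l4lb.thisdcos.directory:5000
```

The DNS name gives you round-robin balancing for free; the VIP gives you a single stable address and port, which matters for software (like Redis, below) that clusters on IP and port rather than on names.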
Written out, it looks like this: you can designate an address with
an IP and port and load-balance on that, or use a ZooKeeper-based, DNS-based, or VIP-based form. Why do you need so many forms of load balancing? Let me explain with one example. You cannot do DNS-based clustering with Redis, because DNS-based clustering is blocked there. You have to cluster on IP and port, since Redis nodes communicate internally by address. So you need this kind of feature
to run such a cluster. Likewise, you really are given multiple strategies. So far I have talked about VIP- and DNS-based load balancing. Next is what we use the most with DC/OS: Marathon-LB. Many of you may know it as HAProxy. You can wrap HAProxy
with the Marathon API, as a load balancer managed through Marathon,
which also includes Layer 7 (L7) features. It supports messenger-style services
over WebSocket, and even plain TCP through the reverse proxy,
for load balancing. It covers the general functions a web server provides: for instance, serving HTTPS with SSL certificates, routing by URL location, or applying ACLs based on header values. You can also use a sticky-session-like function to pin connections. You can use this for thumbnail servers. Thumbnail servers usually do caching, right? If every one of the distributed thumbnail servers caches everything, a huge cache volume builds up on each server. Instead, you put a key in the request header so that HAProxy load-balances on the hash of that ID; then the identical thumbnail always goes to the same cache server. That is another possible strategy. Of course, virtual hosts are also possible. One crucial downside is that
you cannot use port 80 directly. Many HAProxy users have asked us to enable port 80 as well. The HAProxy strategy instead is to use a virtual host so that it looks as if you are serving on port 80. A web server like Nginx
does not have this restriction, so do keep in mind that it exists here. So this kind of strategic usage is possible.
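The virtual-host and sticky-session behavior described above is driven by labels on the Marathon app. Here is a sketch using marathon-lb's documented label names (`HAPROXY_GROUP`, `HAPROXY_0_VHOST`, `HAPROXY_0_STICKY`); the app id, domain, and port are made up for illustration.

```python
# Sketch: exposing a service through Marathon-LB (HAProxy) with a
# virtual host, so clients use a normal hostname instead of the high
# service port. Label names follow marathon-lb conventions; the
# domain and port below are placeholders.

app = {
    "id": "/web",
    "labels": {
        # Which HAProxy group (e.g. the public marathon-lb instances)
        # should serve this app.
        "HAPROXY_GROUP": "external",
        # Route requests whose Host header matches this virtual host.
        "HAPROXY_0_VHOST": "web.example.com",
        # Optional: sticky sessions for port index 0.
        "HAPROXY_0_STICKY": "true",
    },
    "portDefinitions": [{"port": 10101, "protocol": "tcp"}],
}

print("vhost:", app["labels"]["HAPROXY_0_VHOST"])
print("service port:", app["portDefinitions"][0]["port"])
```

With the vhost label, the browser talks to `web.example.com` on the public node, and HAProxy maps that Host header to the app's service port internally, which is the workaround for not binding port 80 per app.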
I am sure you all know this diagram. The left is the general layout,
while the right applies on AWS. You can also implement auto-scaling
via the Marathon-LB API by deploying a separate
auto-scaling container. That is another possible strategy. You can use Marathon-LB
both publicly and internally, but keep one thing in mind. Marathon-LB does a
tremendous amount of work. It is very light, but because of the
load it handles, you would normally select a few nodes as public nodes and serve external traffic from there. However, you can also run it internally on the agent (slave) nodes. Say you have 40 containers lined up
to perform identical work: during internal load balancing, you may want to put more weight
on certain containers, or apply ACLs to split the traffic among containers with the same function. For that, you can run Marathon-LB internally. But one thing to be careful about: if you run too many instances,
it affects the whole service. So internally, you need strategies
like running one per rack or per zone. And if you run too many public instances,
you waste resources, so I recommend adjusting the deployment to the service. Next is monitoring. DC/OS provides many features, APIs, and parameters for
basic monitoring, logging, and debugging. I could not list everything that is provided,
so I have listed only a few here. The full list is in the documentation.
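As an illustration of consuming such built-in metrics, here is a minimal polling sketch. The URL path below is a placeholder, not the exact DC/OS route, and a real cluster also requires an auth token through the admin router; check the DC/OS metrics API reference for the actual endpoints on your version.

```python
# Sketch: polling a node-metrics endpoint and reducing the datapoints
# to a {name: value} summary. The URL path is a placeholder; consult
# the DC/OS metrics API docs for the real route and authentication.
import json
from urllib.request import urlopen

def fetch_node_metrics(base_url):
    # In a real cluster this goes through the admin router and needs
    # an auth token header (omitted here for brevity).
    with urlopen(f"{base_url}/system/v1/metrics/v0/node") as resp:
        return json.load(resp)

def summarize(metrics):
    # Reduce a list of datapoints to a {name: value} dict.
    return {d["name"]: d["value"] for d in metrics.get("datapoints", [])}

# A fabricated sample payload, just to show the summary shape.
sample = {"datapoints": [{"name": "cpu.total", "value": 12.5},
                         {"name": "memory.free", "value": 2048}]}
print(summarize(sample))  # {'cpu.total': 12.5, 'memory.free': 2048}
```

The point is that the collection agent is already in the platform; your side of the work is only the query and the dashboarding.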
Could you open the link, please? Please scroll down slowly,
all the way to the bottom. It is a long list:
it provides all the data and parameters that an administrator or infrastructure engineer would want to collect and check. Normally, when you are not on such a platform and deploy on plain bare metal, you would install a separate logging solution or an agent to collect these yourself. That would be the usual management approach. But these are already built into DC/OS: collecting metrics for a service,
checking basic CPU, disk, and memory usage rates, or even collecting and displaying traces,
such as Zipkin-style logs, is possible. There are already numerous fields in the logs,
so the question is which ones to use,
not whether a function you want is missing. The logging side likewise collects many things: user access logs, real-time task operation logs,
and task logs. And it does not stop at collection. The left of this slide shows Kibana (ELK)
and the right shows a Splunk screen. DC/OS provides interfaces that let you easily integrate with ELK or Splunk. Debugging is still in beta. You can debug with everything listed on the left, and CLI-based debugging is also possible. If you want to see the status of some category from the CLI, you can, for practically anything, and you can build and decorate your own dashboard with what is collected. All of this is available. Due to the unexpected delay,
I covered my session a bit quickly,
and so we have reached the last part already. The title of my presentation was 'MSA integration' something-or-other. So I want to tell you this: the reason we use DC/OS is
that we want to build good services. Hence the title, "Make Good Services with MSA." I did not write 'microservice' here because I think the M in MSA should not stand for 'micro.' Instead,
I often read it as 'managing.' We use MSA to effectively manage and
govern small services in a distributed environment, not merely to make them small or micro. So with DC/OS, we need things like containers,
load balancing, monitoring, logging, and debugging. And we also need the most important activity, which is developing. Let me explain development with a rearranged diagram. We need to put our effort
into actual service development. Designing, programming, and testing
must all be handled by people: individuals working as a team,
communicating, interacting, and putting in the effort to build better services, whether agile or waterfall. These are all done by people.
We always need people working together here. That is how we strive toward better services, because that is what turns the output into a good service
we are satisfied with. What I think our industry is getting wrong, as I mentioned before, is that project teams spend
too much time on things like containerizing, load balancing, monitoring,
debugging, logging, and building CI/CD pipelines. If the total development period is six months,
about three months are spent on these. The whole team, or the key staff
who should be concentrating on building a good service, actually spend more time on these tasks. Say a team wants to adopt Kubernetes. Its members spend the majority of their time getting the installation and operation to work, and that is why they cannot develop the service. The launch date is only a month away
and yet they still cannot run it properly, or they have not even reached
the service-development stage because they are still learning the API gateway. These are the kinds of things happening. Realistically speaking,
I see a lot of you smiling. If you have experienced this kind of dilemma,
would you please raise your hand? Yes, a lot of you. This is the reality we live in. So Mesosphere DC/OS, or Mesosphere the company itself, is essentially saying that these blue parts: container service stability,
load balancing, debugging, monitoring, logging, CI/CD pipelines, Kafka installation,
Spark cluster configuration, hardware and ETL provisioning, Cassandra fast-data pipelines:
all of these will be taken care of by its service. We want to save your time so that you can
spend it on a better service-development strategy. That is the development approach
we keep recommending. A manual Kubernetes installation takes a total of 21 steps, but you can use our Kubernetes package install to save that time. Has anyone here installed Kubernetes on AWS? You can install it, but operating it is very difficult. Say there are three teams, A, B, and C, in one company,
and all three of them want Kubernetes. As many of you know, to run Kubernetes well on AWS,
you need 'Large 8'-class machines. They are good machines, with a 10G backend network, but one of them, bare metal or VM, costs about
￦20,000,000 (approx. $17,000) in maintenance. If teams A, B, and C all want Kubernetes,
the company has to give each team 10, for a total of 30 such machines, which is
a significant maintenance cost. But with DC/OS,
you need only about 15 of them to run three Kubernetes services,
which lets all three teams work.
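The cost comparison in rough numbers, using the figures from the talk (the per-node cost is the speaker's estimate, not a quote):

```python
# Sketch of the cost arithmetic above. Figures are the speaker's rough
# estimates: 10 large nodes per dedicated Kubernetes cluster, 15 nodes
# for one shared DC/OS cluster, ~20,000,000 KRW maintenance per node.
COST_PER_NODE_KRW = 20_000_000

teams = 3
nodes_per_dedicated_cluster = 10   # one Kubernetes cluster per team
nodes_shared_dcos = 15             # one DC/OS cluster running all three

dedicated = teams * nodes_per_dedicated_cluster * COST_PER_NODE_KRW
shared = nodes_shared_dcos * COST_PER_NODE_KRW

print(f"dedicated: {dedicated:,} KRW")  # dedicated: 600,000,000 KRW
print(f"shared:    {shared:,} KRW")     # shared:    300,000,000 KRW
print(f"saved:     {dedicated - shared:,} KRW")
```

Under these assumptions the shared cluster halves the node count, which is the whole of the cost argument.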
DC/OS is cost-efficient is this. Of course, for a company with considerable financial asset,
it would be a minor investment. So this is the end my presentation. I hope that we could really concentrate on what
we should be focusing on, which is development, so that Korean companies
can ultimately flourish with great services. Companies like Uber and GE have attended this conference. Likewise, I hope that someday
our engineers can take part in international conferences
as presenters of great services. And if I may, I would like to
briefly introduce our company. What time is it? I finished early, as I had planned. Our company is called KBSYS,
and we were founded in 2015. We do a lot of different things; these slides show what we do.
Basically, we do anything and everything. It also says we do MSA well, but really, we just do it. You only know once you actually try, right? And we are an official DC/OS partner, serving as a technology conduit not only in Korea but in many countries across Asia. These are our achievements. We also do things like
media-industry system development. This is part of our company introduction,
and there was no other place to put it except my slides. So, KBSYS is a good company, and so on. Thank you. Does anyone have a question?

Thank you so much for your presentation, Mr. Lee. We will now have a Q&A session. If you have any questions for Mr. Lee, please raise your hand. Please pass the microphone.

I would like to know about compatibility between the UCR and Docker runtimes, and also whether the Catalog Services can be
provided via the Docker runtime.

The UCR container and the Docker runtime are different. The Docker runtime is just
one implementation of Linux containers, so you can run a UCR container
without the Docker engine. Even if you have been building images with Docker, you cannot use them in exactly the same way on UCR; instead, you have to create things anew for UCR. It certainly works better, but you have to study more. There is a bit of a learning curve to it. Docker is a different story. Docker and UCR run separately from each other, and
they both sit on the Linux kernel. With Docker, it is not Docker itself that runs;
it is the Linux container machinery that runs, as a process in a container deployed via Docker. So from the Linux side,
Docker and UCR are pretty much the same: different implementation technologies.
Which one to use is up to the developer. Use Docker for what can be done with Docker, because it is the easiest. If not, lay out the possible container technology options, of which UCR is just one. Did that answer your question?

You said there are multiple Catalog Services.
Are they Docker based?

The Catalog Services are mostly UCR based.
I don't think there is a Docker-based one, as far as I know.

If I use Docker, can I not use the Catalog Services?

I don't think you can use those services with Docker. But that question could be better answered by Sam Chen over there. Would someone please pass him the microphone? And could the questioner please put on the receiver? Right. So I think the question is whether
the services in the Catalog are based on UCR or Docker. It's actually up to the vendor,
the company that makes the service. Right, so some services run on Docker,
and a lot of them also use UCR. It's really up to the vendor which type of runtime they want to use. Yeah, we are agnostic, right?
Some companies are used to Docker. They operate on Docker,
so that's more comfortable for them. Other companies
may have had some issues with Docker in production at scale, so they
prefer to use UCR as the runtime. But you can still use Docker for development, right?
UCR can leverage Docker images. There's no configuration that has to happen. We can pull from any Docker registry,
and it will just run with UCR as the runtime. Would that answer your question?

So the question is whether they can coexist.
Right now, they do coexist. Right, there are two options when you deploy a service: you can choose Docker or you can choose UCR. All you do is provide the image tag and specify the runtime, and it will choose one or the other, based on what you want.

So the question is whether this is defined at the beginning or on the fly. When you install DC/OS,
Docker gets installed on every single node, right? But when you actually run a service,
that's where you decide which runtime you want to use. Okay, any other questions? It appears the questions have been answered. Hope you all have a great night, and please give a big round of applause for Mr. Lee!
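As a supplement to the Q&A above: the per-service runtime choice is made in the Marathon app definition, where `container.type` selects `DOCKER` (Docker Engine) or `MESOS` (UCR), and UCR can pull the same registry image. A sketch; the app ids and image tag are illustrative.

```python
# Sketch: the same registry image declared under either runtime.
# container.type selects the engine; UCR (type MESOS) can pull the
# same Docker-registry image without the Docker daemon.

image = "nginx:1.25"  # any registry image; this tag is illustrative

docker_app = {
    "id": "/web-docker",
    "container": {"type": "DOCKER", "docker": {"image": image}},
}

ucr_app = {
    "id": "/web-ucr",
    "container": {"type": "MESOS", "docker": {"image": image}},
}

# Both definitions reference the identical image; only the engine differs.
print("runtimes:", docker_app["container"]["type"], ucr_app["container"]["type"])
```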