
Docker [7]


Hi everyone, I'm 脚丫先生 (o^^o).

Although my day-to-day work is in big data and backend development, product delivery somehow always lands on my plate (it nearly wears me out).

As you all know, setting up microservices and their base environments drags in all kinds of environment dependencies, which is undeniably painful.

To tackle this problem, we need an efficient approach so the job gets done with less friction.

So today I'm sharing docker-compose files for a whole set of applications. Compose them to fit your own needs and you can deliver products quickly and efficiently; freeing up your hands is long overdue.

Preface

Handing out docker-compose files without the images would be unforgivable. So, to save everyone the trouble (and the cursing), I'm providing the images as well.

Scholars of old always had teachers. I hope to carry you out into the ocean of Docker, never to return~

Link: https://pan.baidu.com/s/1Bz1VVL-eq_yZVh2G3_PMXw  Extraction code: yyds

You can retag the images yourself:

1) Load the image tarball as an image:

docker load < image.tar

2) Retag the image:

docker tag 2e25d8496557 xxxxx.com/abc-dev/arc:1334

2e25d8496557: the IMAGE ID (look it up with docker images)

xxxxx.com: your private registry (hub) domain

abc-dev: the project name

arc: the image name

1334: the image tag (version)
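If the retagged image is headed for a private registry, the natural follow-up is a push (a quick sketch, assuming you have login access and push rights on the abc-dev project at xxxxx.com):

# log in to the private registry once per session
docker login xxxxx.com
# push the retagged image
docker push xxxxx.com/abc-dev/arc:1334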

Now, on to the full one-stop tour.

(This summary of docker-compose deployments will be kept up to date.)

1. Databases

The importance of the database goes without saying: when two armies clash, provisions move first. The database is our provisions. It is the foundation we build applications on, and it comes in many varieties; pick whichever fits your business needs.

With docker-compose, standing up a database takes seconds, a world away from the traditional grind.

1.1 Deploying MySQL with Docker Compose

First, pick a directory, say /usr/local, and create a mysql folder in it. Then write a docker-compose.yml inside that folder.

(Every container that follows is deployed the same way as MySQL.)

1) Write the docker-compose.yml file

[root@spark1 mysql]# vi docker-compose.yml

version: '3.0'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 123456
    command: --default-authentication-plugin=mysql_native_password
             --character-set-server=utf8mb4
             --collation-server=utf8mb4_general_ci
             --explicit_defaults_for_timestamp=true
             --lower_case_table_names=1
             --sql-mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
    ports:
      - 3306:3306
    volumes:
      - ./log:/var/log/mysql
      - ./data:/var/lib/mysql
      - ./conf:/etc/mysql

Note: with internet access, Docker will pull the image automatically. On an offline (internal) network, you must fetch the image in advance and load it onto the server; see the preface above and the image tarballs I prepared.

2) Start the container

[root@spark1 mysql]# docker-compose up -d
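With the container running, a quick sanity check from the host looks like this (a sketch: with no container_name set, Compose derives the name from the folder and service, e.g. mysql_db_1, so confirm yours with docker ps first):

# find the running MySQL container
[root@spark1 mysql]# docker ps --filter "ancestor=mysql:5.7"
# log in with the root password from the compose file and print the version
[root@spark1 mysql]# docker exec -it mysql_db_1 mysql -uroot -p123456 -e "SELECT VERSION();"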

1.2 Deploying Redis with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 redis]# vi docker-compose.yml

version: '3.0'
services:
  redis:
    restart: always
    image: 10.1.119.12/gx/redis:5.0
    container_name: redis
    ports:
      - 6379:6379
    # mount the persistence directory
    volumes:
      - ./data:/data
    # requirepass: set the login password
    # appendonly: enable AOF persistence
    command: "/usr/local/bin/redis-server --requirepass cetc@2021 --appendonly yes"

2)启动容器

[root@spark1 redis]# docker-compose up -d
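To verify both the service and the requirepass password in one go, ping Redis inside the container (container name and password taken from the compose file above):

# a healthy instance answers PONG
[root@spark1 redis]# docker exec -it redis redis-cli -a cetc@2021 ping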

1.3 Deploying PostgreSQL with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 postgres]# vi docker-compose.yml

version: '3.0'
services:
  postgres:
    restart: always
    image: 10.1.119.12/basic/postgres:11
    privileged: true
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: postgres   # password
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ./pgData:/var/lib/postgresql/data/pgdata

2)启动容器

[root@spark1 postgres]# docker-compose up -d
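A quick connectivity check (a sketch: without a container_name, Compose names the container something like postgres_postgres_1; verify with docker ps):

# run a trivial query as the default postgres superuser
[root@spark1 postgres]# docker exec -it postgres_postgres_1 psql -U postgres -c "SELECT version();"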

1.4 Deploying Oracle with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 oracle]# vi docker-compose.yml

version: '3.0'
services:
  oracle:
    container_name: oracle
    image: registry.cn-hangzhou.aliyuncs.com/helowin/oracle_11g
    ports:
      - 1521:1521
    restart: always

2)启动容器

[root@spark1 oracle]# docker-compose up -d
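This community Oracle image takes a while to initialize, so watch the logs before connecting, then probe the listener port (nc is only a reachability check, not a full login test):

# follow startup output until it settles
[root@spark1 oracle]# docker logs -f oracle
# confirm the listener port is reachable
[root@spark1 oracle]# nc -zv localhost 1521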

1.5 Deploying InfluxDB with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 influxdb]# vi docker-compose.yml

version: '3.0'
services:
  influxdb:
    restart: always
    image: 10.1.119.12/basic/influxdb:1.8
    container_name: influxdb
    privileged: true
    ports:
      - 8083:8083
      - 8086:8086
    volumes:
      - ./data:/var/lib/influxdb/data
      - ./conf:/etc/influxdb

2)启动容器

[root@spark1 influxdb]# docker-compose up -d
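InfluxDB 1.x ships a /ping health endpoint; an HTTP 204 response means the instance is up:

# expect "HTTP/1.1 204 No Content"
[root@spark1 influxdb]# curl -i http://localhost:8086/ping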

1.6 Deploying Neo4j with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 neo4j]# vi docker-compose.yml

version: '3.0'
services:
  neo4j:
    container_name: neo4j
    image: spins/neo4j-community-apoc:3.5.5
    ports:
      - 17474:7474
      - 17687:7687
    restart: always
    volumes:
      - ./data:/var/lib/neo4j/data
      - ./logs:/var/lib/neo4j/logs
      - /tmp:/tmp
    deploy:
      resources:
        limits:
          cpus: '1.00'
          memory: 1024M
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
        max-file: "10"
    environment:
      - NEO4J_AUTH=neo4j/123456

2)启动容器

[root@spark1 neo4j]# docker-compose up -d
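To verify, open the browser UI on the remapped port (http://<host>:17474) and log in as neo4j/123456 per NEO4J_AUTH, or run a throwaway query with cypher-shell (a sketch assuming this community build bundles the standard Neo4j 3.5 tooling):

# any result at all means auth and the bolt listener are working
[root@spark1 neo4j]# docker exec -it neo4j cypher-shell -u neo4j -p 123456 "RETURN 1;"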

1.7 Deploying OpenTSDB with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 openTSDB]# vi docker-compose.yml

version: '3.0'
services:
  opentsdb-docker:
    image: petergrace/opentsdb-docker:latest
    container_name: opentsdb
    network_mode: "host"
    privileged: true
    environment:
      - WAITSECS=30
    ports:
      - 4242:4242
    volumes:
      - ./data:/data/hbase    # data directory
      - ./opentsdb/opentsdb.conf:/etc/opentsdb/opentsdb.conf    # config file

2)启动容器

[root@spark1 openTSDB]# docker-compose up -d
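Because the container uses network_mode: host, OpenTSDB listens directly on the host's port 4242, where its HTTP API exposes a version endpoint:

# a JSON blob with version info means OpenTSDB is serving
[root@spark1 openTSDB]# curl http://localhost:4242/api/version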

1.8 Deploying SQL Server with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 sqlserver]# vi docker-compose.yml

version: '3.0'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2017-latest
    restart: always
    container_name: sqlserver
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: cetc@2021
    ports:
      - 1433:1433
    volumes:
      - ./mssql:/var/opt/mssql

2)启动容器

[root@spark1 sqlserver]# docker-compose up -d
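To confirm SQL Server accepts logins, run sqlcmd inside the container (the path below is where Microsoft's 2017 images ship the tools; adjust if your image differs):

# log in as SA with the password from the compose file
[root@spark1 sqlserver]# docker exec -it sqlserver /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'cetc@2021' -Q "SELECT @@VERSION"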

2. Base Environment

2.1 Deploying Tomcat with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 tomcat]# vi docker-compose.yml

version: '3'
services:
  tomcat:
    restart: always
    image: tomcat
    container_name: tomcat
    ports:
      - 8080:8080

2)启动容器

[root@spark1 tomcat]# docker-compose up -d
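A probe of the mapped port confirms Tomcat is serving (recent official images ship an empty webapps directory, so an HTTP 404 on / is still a healthy response):

[root@spark1 tomcat]# curl -i http://localhost:8080/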

2.2 Deploying MinIO with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 minio]# vi docker-compose.yml

version: '3'
services:
  minio:
    image: minio/minio:latest
    restart: always
    container_name: myminio
    ports:
      - 9000:9000
    volumes:
      - /usr/local/dockers/minio/data:/data
      - /usr/local/dockers/minio/config:/root/.minio
    environment:
      MINIO_ACCESS_KEY: "minio"
      MINIO_SECRET_KEY: "minio123"
    command: server /data

2)启动容器

[root@spark1 minio]# docker-compose up -d
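MinIO exposes an unauthenticated liveness endpoint, and the web console answers on the same port using the MINIO_ACCESS_KEY/MINIO_SECRET_KEY pair from the compose file:

# expect HTTP 200 if the server is alive
[root@spark1 minio]# curl -i http://localhost:9000/minio/health/live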

2.3 Deploying Elasticsearch with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 elasticsearch]# vi docker-compose.yml

version: '3.1'
services:
  elasticsearch:                    # service name
    image: elasticsearch:7.16.1     # image to pull
    restart: always                 # restart the container with the Docker daemon
    container_name: elasticsearch   # container name
    ports:
      - 9200:9200                   # mapped port
    environment:
      discovery.type: single-node

2)启动容器

[root@spark1 elasticsearch]# docker-compose up -d
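Elasticsearch's REST port doubles as a health check: the root endpoint returns a JSON banner with the version, and _cluster/health reports status:

[root@spark1 elasticsearch]# curl http://localhost:9200/
[root@spark1 elasticsearch]# curl http://localhost:9200/_cluster/health?pretty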

2.4 Deploying FTP with Docker Compose

1) Write the docker-compose.yml file

[root@spark1 ftp]# vi docker-compose.yml

version: '3.1'
services:
  ftp:
    restart: always
    image: 10.1.119.12/gx/ftp:latest
    network_mode: "host"
    container_name: iot-ftp
    environment:
      PASV_MIN_PORT: 21100
      PASV_MAX_PORT: 21110
      PASV_ADDRESS: 172.19.161.40
      FTP_USER: ftpuser
      FTP_PASS: 123456
    ports:
      - "31020:20"
      - "31021:21"
      - "31100-31110:21100-21110"
    volumes:
      - ./vsftpd:/home/vsftpd

2)启动容器

[root@spark1 ftp]# docker-compose up -d
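One caveat worth knowing: with network_mode: "host", Docker ignores the ports: section entirely, so vsftpd listens on the host's 20/21 and the 21100-21110 passive range directly. A quick login-and-list test with curl (credentials from FTP_USER/FTP_PASS above):

# list the FTP root; a directory listing means login and passive mode work
[root@spark1 ftp]# curl ftp://ftpuser:123456@172.19.161.40/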

2.5 Deploying Kafka with Docker Compose

1) Write the ZooKeeper docker-compose.yml file

[root@spark1 zookeeper]# vi docker-compose.yml

version: '3.0'
services:
  zoo1:
    image: zookeeper:3.5.9
    restart: always
    ports:
      - 2181:2181
    volumes:
      - ./zookeeper1/data:/data
      - ./zookeeper1/zoo-log:/datalog
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
  zoo2:
    image: zookeeper:3.5.9
    restart: always
    ports:
      - 2182:2181
    volumes:
      - ./zookeeper2/data:/data
      - ./zookeeper2/zoo-log:/datalog
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
  zoo3:
    image: zookeeper:3.5.9
    restart: always
    ports:
      - 2183:2181
    volumes:
      - ./zookeeper3/data:/data
      - ./zookeeper3/zoo-log:/datalog
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

2) Start the ZooKeeper containers

[root@spark1 zookeeper]# docker-compose up -d
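Before moving on to Kafka, check that the ensemble has elected a leader using ZooKeeper's four-letter commands (srvr is on the 3.5.x command whitelist by default):

# query each node's role; expect one leader and two followers
[root@spark1 zookeeper]# for p in 2181 2182 2183; do echo srvr | nc localhost $p | grep Mode; done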

3) Write the Kafka docker-compose.yml file

[root@spark1 kafka]# vi docker-compose.yml

version: '3.0'
services:
  kafka1:
    image: kafka:0.11.0.1
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: 172.16.119.11
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.16.119.11:9092
      KAFKA_ZOOKEEPER_CONNECT: 172.16.119.11:2181,172.16.119.11:2182,172.16.119.11:2183
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_DELETE_TOPIC_ENABLE: "true"
    container_name: kafka1
    volumes:
      - /etc/localtime:/etc/localtime:ro
  kafka2:
    image: kafka:0.11.0.1
    ports:
      - "9093:9092"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_HOST_NAME: 172.16.119.11
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.16.119.11:9093
      KAFKA_ZOOKEEPER_CONNECT: 172.16.119.11:2181,172.16.119.11:2182,172.16.119.11:2183
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_DELETE_TOPIC_ENABLE: "true"
    container_name: kafka2
    volumes:
      - /etc/localtime:/etc/localtime:ro
  kafka3:
    image: kafka:0.11.0.1
    ports:
      - "9094:9092"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_HOST_NAME: 172.16.119.11
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.16.119.11:9094
      KAFKA_ZOOKEEPER_CONNECT: 172.16.119.11:2181,172.16.119.11:2182,172.16.119.11:2183
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_DELETE_TOPIC_ENABLE: "true"
    container_name: kafka3
    volumes:
      - /etc/localtime:/etc/localtime:ro

4) Start the Kafka containers

[root@spark1 kafka]# docker-compose up -d
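To smoke-test the cluster, create and describe a replicated topic with the CLI baked into the image (a sketch: I'm assuming this private image puts Kafka's bin scripts on the PATH; with the 0.11 tooling, topic commands still go through ZooKeeper):

# create a topic replicated across all three brokers
[root@spark1 kafka]# docker exec -it kafka1 kafka-topics.sh --create --zookeeper 172.16.119.11:2181 --topic smoke-test --partitions 3 --replication-factor 3
# confirm leaders and replicas are spread across brokers 1-3
[root@spark1 kafka]# docker exec -it kafka1 kafka-topics.sh --describe --zookeeper 172.16.119.11:2181 --topic smoke-test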

2.6 Deploying datax-web with Docker Compose

1) Write the datax-web docker-compose.yml file

[root@spark1 datax-web]# vi docker-compose.yml

version: '3.0'
services:
  dataxweb-admin:
    image: 10.1.119.12/gx/iot-datax-admin:latest
    network_mode: host
    restart: always
    container_name: "dataxweb-admin"
    environment:
      REGISTER: "true"
      server.port: "9527"
      MYSQL_USERNAME: "root"
      MYSQL_PASSWORD: "123456"
      MYSQL_IP_PORT: "172.16.117.171:3306"
      MYSQL_DB_NAME: "datax_web"
    command: []
  dataxweb-executor:
    image: 10.1.119.12/gx/dataxweb/executor:iot
    network_mode: host
    restart: always
    container_name: "dataxweb-executor"
    depends_on:
      - dataxweb-admin
    environment:
      REGISTER: "true"
      DATAX_WEB_URL: "http://172.16.117.171:9527"   # address of dataxweb-admin
    command: []

2) Start the datax-web containers

[root@spark1 datax-web]# docker-compose up -d

Note: if you need the datax_web database schema, ask me for it.
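Once both containers are up (they use host networking), the admin UI should answer on the port set via server.port above; a bare curl is enough to confirm reachability:

# any HTTP response means the admin service is listening
[root@spark1 datax-web]# curl -i http://172.16.117.171:9527/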

2.7 Deploying Nacos with Docker Compose

1) Write the Nacos docker-compose.yml file

[root@spark1 nacos]# vi docker-compose.yml

version: '2'
services:
  nacos:
    image: nacos/nacos-server:latest
    container_name: nacos-standalone-mysql
    network_mode: "host"
    environment:
      PREFER_HOST_MODE: "hostname"
      MODE: "standalone"
    volumes:
      - ./application.properties:/home/nacos/conf/application.properties
      - ./standalone-logs/:/home/nacos/logs
    ports:
      - "8848:8848"
    restart: on-failure

Then create application.properties in the nacos folder:

server.contextPath=/nacos
server.servlet.contextPath=/nacos
server.port=8848
management.metrics.export.elastic.enabled=false
management.metrics.export.influx.enabled=false
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i
server.tomcat.basedir=
nacos.security.ignore.urls=/,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/v1/auth/login,/v1/console/health/**,/v1/cs/**,/v1/ns/**,/v1/cmdb/**,/actuator/**,/v1/console/server/**
spring.datasource.platform=mysql
db.num=1
db.url.0=jdbc:mysql://172.10.10.71:3306/ry-config?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true
db.user=root
db.password=cetc@2021

2) Start the Nacos container

[root@spark1 nacos]# docker-compose up -d
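Nacos can take half a minute to boot; once it does, the console lives under the /nacos context path, and a fresh install accepts the stock nacos/nacos credentials:

# expect an HTTP redirect or the console page
[root@spark1 nacos]# curl -i http://localhost:8848/nacos/
# then log in at http://<host>:8848/nacos with nacos / nacos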

3. Frontend

3.1 Deploying Nginx with Docker Compose

1) Write the nginx docker-compose.yml file

[root@spark1 nginx]# vi docker-compose.yml

version: '3.0'
services:
  nginx:
    restart: always
    image: nginx
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./log:/var/log/nginx
      - ./html:/usr/share/nginx/html

Note: nginx.conf must be adapted to your own environment.

nginx.conf

user root;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 80;
        server_name localhost;
        client_max_body_size 500M;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }

        location /api/ {
            proxy_set_header Host $host;
            proxy_pass http://192.168.239.129:50200/;
            #add_header 'Access-Control-Allow-Origin' '*';
            #add_header 'Access-Control-Allow-Credentials' 'true';
            #add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

2) Start the nginx container

[root@spark1 nginx]# docker-compose up -d
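Two quick probes cover both halves of this config: the static site on / and the reverse proxy on /api/ (the latter only succeeds if the upstream at 192.168.239.129:50200 is actually running):

[root@spark1 nginx]# curl -i http://localhost/
[root@spark1 nginx]# curl -i http://localhost/api/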

To sum up: every docker-compose container deployment boils down to the same three steps:

(1) In a directory of your choice, create a folder for the container. For example, to deploy MySQL, create a mysql folder there.

(2) In that folder, write the container's docker-compose.yml.

(3) Start the container with docker-compose up -d.

More docker-compose deployments will be added here over time.

Wishing you all great results and a full harvest!



      CopyRight 2018-2019 实验室设备网 版权所有