
Deploying an ELK Log Analysis System on CentOS 7.6

This post records the steps I took to deploy an ELK log analysis system on CentOS 7.6; I hope it is useful.

Download Elasticsearch

創(chuàng)建elk用戶并授權(quán)
useradd elk
chown -R elk:elk /home/elk/elasticsearch
chown -R elk:elk /home/elk/elasticsearch1
chown -R elk:elk /home/elk/elasticsearch2
mkdir -p /home/eladata
mkdir -p /var/log/elk
chown -R elk:elk /home/eladata
chown -R elk:elk /var/log/elk
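
Before starting Elasticsearch, raise the kernel and user limits it checks at startup. These exact values are the standard Elasticsearch bootstrap requirements, not something recorded in the original setup:
# mmap count required by Elasticsearch's bootstrap checks
sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
# open-file limit for the elk user (matches LimitNOFILE in the unit files below)
cat >> /etc/security/limits.conf <<'EOF'
elk soft nofile 65536
elk hard nofile 65536
EOF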

主節(jié)點master

Unpack Elasticsearch and edit its configuration file
/home/elk/elasticsearch/config
[root@localhost config]# grep -v "^#" elasticsearch.yml
cluster.name: my-application
node.name: node0
node.master: true
node.attr.rack: r1
node.max_local_storage_nodes: 3
path.data: /home/eladata
path.logs: /var/log/elk
http.cors.enabled: true
http.cors.allow-origin: "*"
network.host: 192.168.1.70
http.port: 9200
transport.tcp.port: 9301
discovery.zen.minimum_master_nodes: 1
cluster.initial_master_nodes: ["node0"]

Manual start command
su elk -l -c '/home/elk/elasticsearch/bin/elasticsearch -d'
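
Once it is up, a quick curl against the HTTP port confirms the node is answering (adjust the address if yours differs):
curl http://192.168.1.70:9200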

Startup unit file: elasticsearch.service
[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat elasticsearch.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target
[Service]
RuntimeDirectory=elasticsearch
PrivateTmp=true
Environment=ES_HOME=/home/elk/elasticsearch
Environment=ES_PATH_CONF=/home/elk/elasticsearch/config
Environment=PID_DIR=/var/run/elasticsearch
EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/home/elk/elasticsearch
User=elk
Group=elk
ExecStart=/home/elk/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
StandardOutput=journal
StandardError=inherit
LimitNOFILE=65536
LimitNPROC=4096
LimitAS=infinity
LimitFSIZE=infinity
TimeoutStopSec=0
KillSignal=SIGTERM
KillMode=process
SendSIGKILL=no
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target

[root@localhost system]#
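
With the unit file in place, load and start it the usual systemd way:
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch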

Node1節(jié)點
/home/elk/elasticsearch1/config
[root@localhost config]# grep -v "^#" elasticsearch.yml
cluster.name: my-application
node.name: node1
node.master: false
node.attr.rack: r1
node.max_local_storage_nodes: 3
path.data: /home/eladata
path.logs: /var/log/elk
http.cors.enabled: true
http.cors.allow-origin: "*"
network.host: 192.168.1.70
transport.tcp.port: 9303
http.port: 9302
discovery.zen.ping.unicast.hosts: ["192.168.1.70:9301"]
[root@localhost config]#

Manual start command
su elk -l -c '/home/elk/elasticsearch1/bin/elasticsearch -d'

Startup unit file: elasticsearch1.service
[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat elasticsearch1.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target
[Service]
RuntimeDirectory=elasticsearch1
PrivateTmp=true
Environment=ES_HOME=/home/elk/elasticsearch1
Environment=ES_PATH_CONF=/home/elk/elasticsearch1/config
Environment=PID_DIR=/var/run/elasticsearch
EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/home/elk/elasticsearch1
User=elk
Group=elk
ExecStart=/home/elk/elasticsearch1/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
StandardOutput=journal
StandardError=inherit
LimitNOFILE=65536
LimitNPROC=4096
LimitAS=infinity
LimitFSIZE=infinity
TimeoutStopSec=0
KillSignal=SIGTERM
KillMode=process
SendSIGKILL=no
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target

[root@localhost system]#

Node2節(jié)點
/home/elk/elasticsearch2/config
[root@localhost config]# grep -v "^#" elasticsearch.yml
cluster.name: my-application
node.name: node2
node.attr.rack: r1
node.master: false
node.max_local_storage_nodes: 3
path.data: /home/eladata
path.logs: /var/log/elk
http.cors.enabled: true
http.cors.allow-origin: "*"
network.host: 192.168.1.70
http.port: 9203
transport.tcp.port: 9304
discovery.zen.ping.unicast.hosts: ["192.168.1.70:9301"]
discovery.zen.minimum_master_nodes: 1
[root@localhost config]#

Manual start command
su elk -l -c '/home/elk/elasticsearch2/bin/elasticsearch -d'

Startup unit file: elasticsearch2.service
[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat elasticsearch2.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target
[Service]
RuntimeDirectory=elasticsearch2
PrivateTmp=true
Environment=ES_HOME=/home/elk/elasticsearch2
Environment=ES_PATH_CONF=/home/elk/elasticsearch2/config
Environment=PID_DIR=/var/run/elasticsearch
EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/home/elk/elasticsearch2
User=elk
Group=elk
ExecStart=/home/elk/elasticsearch2/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
StandardOutput=journal
StandardError=inherit
LimitNOFILE=65536
LimitNPROC=4096
LimitAS=infinity
LimitFSIZE=infinity
TimeoutStopSec=0
KillSignal=SIGTERM
KillMode=process
SendSIGKILL=no
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target

[root@localhost system]#
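
With all three instances running, confirm that they have joined a single cluster; the standard _cat APIs should list node0, node1 and node2, with node0 as master:
curl 'http://192.168.1.70:9200/_cat/nodes?v'
curl 'http://192.168.1.70:9200/_cat/health?v'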

Download Logstash

The directory is as follows; the default configuration works as-is
[root@localhost logstash]# pwd
/home/elk/logstash
[root@localhost logstash]#

Manual start commands
./logstash -f ../dev.conf
nohup ./logstash -f ../dev.conf &
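
Before launching it for real, you can have Logstash validate the pipeline syntax and exit (a standard Logstash flag):
./logstash -f ../dev.conf --config.test_and_exit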

Download Kibana

The configuration file is as follows
[root@localhost config]# pwd
/home/elk/kibana/config
[root@localhost config]# grep -v "^#" kibana.yml
server.host: "192.168.1.70"
elasticsearch.hosts: ["http://192.168.1.70:9200"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"

Manual start commands
./kibana
nohup ./kibana &

Kibana startup unit file
[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat kibana.service
[Unit]
Description=Kibana  Server Manager
[Service]
ExecStart=/home/elk/kibana/bin/kibana
[Install]
WantedBy=multi-user.target
[root@localhost system]#

Kibana listens on port 5601; visit 192.168.1.70:5601 in a browser.
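
Kibana also exposes a status endpoint that is handy for checking it from the shell:
curl http://192.168.1.70:5601/api/status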

Install elasticsearch-head
yum install git npm
git clone https://github.com/mobz/elasticsearch-head.git
[root@localhost elasticsearch-head]# pwd
/home/elk/elasticsearch-head
[root@localhost elasticsearch-head]#

Start
npm install
npm run start
nohup npm run start &

Create a test index so there is something to see in head:
curl -XPUT 'http://192.168.1.70:9200/book'

Then open 192.168.1.70:9100 in a browser to reach the head UI.

Download Kafka

Edit the configuration file as follows
[root@localhost config]# pwd
/home/elk/kafka/config
[root@localhost config]# grep -v "^#" server.properties
broker.id=0
listeners=PLAINTEXT://192.168.1.70:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/var/log/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
[root@localhost config]#

Start the ZooKeeper instance bundled with Kafka

Manual start
[root@localhost bin]# pwd
/home/elk/kafka/bin
[root@localhost bin]#
./zookeeper-server-start.sh ../config/zookeeper.properties
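
To confirm ZooKeeper is serving, the shell client bundled in the same bin directory can list the root znode (exact usage may vary slightly between Kafka versions):
./zookeeper-shell.sh localhost:2181 ls /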

Starting ZooKeeper via systemctl
[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat zookeeper.service
[Service]
Type=forking
SyslogIdentifier=zookeeper
Restart=always
RestartSec=0s
ExecStart=/home/elk/kafka/bin/zookeeper-server-start.sh -daemon /home/elk/kafka/config/zookeeper.properties
ExecStop=/home/elk/kafka/bin/zookeeper-server-stop.sh
[root@localhost system]#
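
Note that this unit file has only a [Service] section; systemctl enable needs an [Install] section, and a [Unit] section is good practice. If you want ZooKeeper started at boot, a minimal addition (my suggestion, not part of the original file) is:
[Unit]
Description=Apache ZooKeeper
After=network.target
[Install]
WantedBy=multi-user.target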

Start the Kafka service

Manual start
./kafka-server-start.sh ../config/server.properties

Starting Kafka via systemctl
[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat kafka.service
[Unit]
Description=Apache kafka
After=network.target
[Service]
Type=simple
Restart=always
RestartSec=0s
ExecStart=/home/elk/kafka/bin/kafka-server-start.sh  /home/elk/kafka/config/server.properties
ExecStop=/home/elk/kafka/bin/kafka-server-stop.sh
[root@localhost system]#

Test Kafka

Create a topic named test
./kafka-topics.sh --create --zookeeper 192.168.1.70:2181 --replication-factor 1 --partitions 1 --topic test

List the topics in Kafka
./kafka-topics.sh --list --zookeeper 192.168.1.70:2181

Produce messages to the test topic
./kafka-console-producer.sh --broker-list 192.168.1.70:9092 --topic test

Consume messages from the test topic
bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.70:9092 --topic test --from-beginning

生產(chǎn)的消息,消費那邊接受到即是ok的

Install Filebeat on the target machines

Version 6.5 is used here
[root@localhost filebeat]# pwd
/usr/local/filebeat
[root@localhost filebeat]# cat filebeat.yml
filebeat.prospectors:
- type: log
  paths:
    - /opt/logs/workphone-tcp/catalina.out
  fields:
    tag: 54_tcp_catalina_out
- type: log
  paths:
    - /opt/logs/workphone-webservice/catalina.out
  fields:
    tag: 54_web_catalina_out
name: 192.168.1.54
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
output.kafka:
  hosts: ["192.168.1.70:9092"]
  topic: "filebeat-log"
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1

[root@localhost filebeat]#
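
Before daemonizing Filebeat, it is worth checking the configuration and running it once in the foreground with logging to stderr (standard Filebeat flags):
./filebeat test config -c filebeat.yml
./filebeat -e -c filebeat.yml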

Once Filebeat is installed, go back to Logstash and edit its pipeline configuration.

Logstash configuration
[root@localhost logstash]# pwd
/home/elk/logstash
[root@localhost logstash]# cat dev.conf
input {
  kafka{
    bootstrap_servers => "192.168.1.70:9092"
    topics => ["filebeat-log"]
    codec => "json"
  }
}
filter {
    if [fields][tag] == "jpwebmap" {
        json {
            source => "message"
            remove_field => "message"
        }
        geoip {
            source => "client"
            target => "geoip"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }
        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
        }
    }
    if [fields][tag] == "54_tcp_catalina_out" {
        grok {
            match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
        }
        date {
            match => ["logdate", "ISO8601"]
        }
        mutate {
            remove_field => [ "logdate" ]
        }
    }
    if [fields][tag] == "54_web_catalina_out" {
        grok {
            match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
        }
        date {
            match => ["logdate", "ISO8601"]
        }
        mutate {
            remove_field => [ "logdate" ]
        }
    }
    if [fields][tag] == "55_tcp_catalina_out" {
        grok {
            match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
        }
        date {
            match => ["logdate", "ISO8601"]
        }
        mutate {
            remove_field => [ "logdate" ]
        }
    }
    if [fields][tag] == "55_web_catalina_out" {
        grok {
            match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
        }
        date {
            match => ["logdate", "ISO8601"]
        }
        mutate {
            remove_field => [ "logdate" ]
        }
    }
    if [fields][tag] == "51_nginx80_access_log" {
        mutate {
            add_field => { "spstr" => "%{[log][file][path]}" }
        }
        mutate {
            split => ["spstr", "/"]
            # save the last element of the array as the api_method.
            add_field => ["src", "%{[spstr][-1]}" ]
        }
        mutate {
            remove_field => [ "friends", "ecs", "agent", "spstr" ]
        }
        grok {
            match => { "message" => "%{IPORHOST:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:time}\] \"%{WORD:method} %{DATA:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent:bytes} \"%{DATA:referrer}\" \"%{DATA:agent}\" \"%{DATA:x_forwarded_for}\" \"%{NUMBER:request_time}\" \"%{DATA:upstream_addr}\" \"%{DATA:upstream_status}\"" }
            remove_field => "message"
        }
        date {
            match => ["time", "dd/MMM/yyyy:HH:mm:ss Z"]
            target => "@timestamp"
        }
        geoip {
            source => "x_forwarded_for"
            target => "geoip"
            database => "/home/elk/logstash/GeoLite2-City.mmdb"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }
        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
        }
    }
}
output {
  if [fields][tag] == "wori" {
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "zabbix"
    }
  }
  if [fields][tag] == "54_tcp_catalina_out" {
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "54_tcp_catalina_out"
    }
  }
  if [fields][tag] == "54_web_catalina_out" {
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "54_web_catalina_out"
    }
  }
  if [fields][tag] == "55_tcp_catalina_out" {
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "55_tcp_catalina_out"
    }
  }
  if [fields][tag] == "55_web_catalina_out" {
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "55_web_catalina_out"
    }
  }
  if [fields][tag] == "51_nginx80_access_log" {
    stdout{}
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "51_nginx80_access_log"
    }
  }
}
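
Once Logstash is running with this pipeline and Filebeat is shipping logs through Kafka, the indices named in the output blocks should start appearing; check with:
curl 'http://192.168.1.70:9200/_cat/indices?v'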

Other configuration files

index.conf
filter {
    mutate {
        add_field => { "spstr" => "%{[log][file][path]}" }
    }
    mutate {
        split => ["spstr", "/"]
        # save the last element of the array as the api_method.
        add_field => ["src", "%{[spstr][-1]}" ]
    }
    mutate {
        remove_field => [ "friends", "ecs", "agent", "spstr" ]
    }
}

java.conf
filter {
  if [fields][tag] == "java" {
    grok {
        match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
    }
    date {
        match => ["logdate", "ISO8601"]
    }
    mutate {
        remove_field => [ "logdate" ]
    }
  } # end if
}

kafkainput.conf
input {
  kafka{
    bootstrap_servers => "172.16.11.68:9092"
    #topics => ["ql-prod-tomcat"]
    topics => ["ql-prod-dubbo","ql-prod-nginx","ql-prod-tomcat"]
    codec => "json"
    consumer_threads => 5
    decorate_events => true
    #auto_offset_reset => "latest"
    group_id => "logstash"
    #client_id => ""
    ############################# HELK Optimizing Latency #############################
    fetch_min_bytes => "1"
    request_timeout_ms => "305000"
    ############################# HELK Optimizing Availability #############################
    session_timeout_ms => "10000"
    max_poll_records => "550"
    max_poll_interval_ms => "300000"
  }
}
#input {
#  kafka{
#    bootstrap_servers => "172.16.11.68:9092"
#    topics => ["ql-prod-java-dubbo","ql-prod","ql-prod-java"]
#    codec => "json"
#    consumer_threads => 15
#    decorate_events => true
#    auto_offset_reset => "latest"
#    group_id => "logstash-1"
#    ############################# HELK Optimizing Latency #############################
#    fetch_min_bytes => "1"
#    request_timeout_ms => "305000"
#    ############################# HELK Optimizing Availability #############################
#    session_timeout_ms => "10000"
#    max_poll_records => "550"
#    max_poll_interval_ms => "300000"
#  }
#}

nginx.conf
filter {
  if [fields][tag] == "nginx-access" {
    mutate {
        add_field => { "spstr" => "%{[log][file][path]}" }
    }
    mutate {
        split => ["spstr", "/"]
        # save the last element of the array as the api_method.
        add_field => ["src", "%{[spstr][-1]}" ]
    }
    mutate {
        remove_field => [ "friends", "ecs", "agent", "spstr" ]
    }
    grok {
        match => { "message" => "%{IPORHOST:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:time}\] \"%{WORD:method} %{DATA:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent:bytes} \"%{DATA:referrer}\" \"%{DATA:agent}\" \"%{DATA:x_forwarded_for}\" \"%{NUMBER:request_time}\" \"%{DATA:upstream_addr}\" \"%{DATA:upstream_status}\"" }
        remove_field => "message"
    }
    date {
        match => ["time", "dd/MMM/yyyy:HH:mm:ss Z"]
        target => "@timestamp"
    }
    geoip {
        source => "x_forwarded_for"
        target => "geoip"
        database => "/opt/logstash-6.2.4/GeoLite2-City.mmdb"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
        convert => [ "[geoip][coordinates]", "float" ]
    }
  } # end if
}

output.conf
output {
  if [fields][tag] == "nginx-access" {
    stdout{}
    elasticsearch {
      user => "elastic"
      password => "WR141bp2sveJuGFaD4oR"
      hosts => ["172.16.11.67:9200"]
      index => "logstash-%{[fields][proname]}-%{+YYYY.MM.dd}"
    }
  }
  #stdout{}
  if [fields][tag] == "java" {
    elasticsearch {
      user => "elastic"
      password => "WR141bp2sveJuGFaD4oR"
      hosts => ["172.16.11.66:9200","172.16.11.68:9200"]
      index => "%{[host][name]}-%{[src]}"
    }
  }
}
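
These split files (index.conf, java.conf, kafkainput.conf, nginx.conf, output.conf) are meant to be loaded together: pointing Logstash at the directory that contains them concatenates them into a single pipeline. The conf.d path here is illustrative:
./bin/logstash -f /home/elk/logstash/conf.d/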
