
Load-testing notes: how to generate 1,000 requests per second in Java, and how to record the data on Linux

陈铭 · 2021-07-12

Load-test requirements

On the APISIX side, 10 consumers (i.e. users) are configured, together with a route matching the URI pattern /test*, so every request whose URI starts with /test gets forwarded. When forwarding, APISIX authenticates the request against the token it carries and runs the plugins configured on the route (database logging, circuit breaking and rate limiting, request/response rewriting, and so on).
Under this routing setup, the goal is to measure APISIX's forwarding latency (excluding the upstream's response time) at a fixed QPS with a random user token per request, at four load levels: 100, 200, 500 and 1,000 requests per second.

Approach

APISIX is a Chinese-developed microservice gateway that wraps NGINX, so the timing it records is no more detailed than what NGINX itself provides. Its log only gives two time values: the time of the client request and the total latency of the whole request flow. The forwarding latency we want to measure is therefore buried inside that total and cannot be separated out directly. To work around this, I configured APISIX's upstream to be another NGINX deployed in the same container (so the cost of the hop between APISIX and that NGINX is negligible), and let that NGINX forward to the real upstream while logging its own total request time. The difference between APISIX's total time and NGINX's total time is then a good approximation of APISIX's forwarding latency.

As for the fixed QPS with random tokens: we can pre-generate 10 tokens and use a scheduled thread pool whose thread count equals the target QPS, with every thread firing one request at APISIX per second using a randomly chosen token. That gives us a load test at a fixed QPS.

We also need to record the container's CPU and memory load. This can be done directly with the top command; afterwards the log file only needs some string processing to pull out the numbers (a parsing sketch follows the command below).

# -d  sampling interval in seconds;  -n  number of samples before top exits
# -p  record only the process with the given PID;  -b  batch mode, so the output can be redirected to a file
# The command below samples once per second and writes to top.log, stopping after 300 samples
top -d 1 -p xxxx -n 300 -b > top.log
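To turn top.log into numbers, a small Java sketch along these lines could work. It assumes the default top batch layout (PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND); the PID, file name and column positions are assumptions, not taken from the post, and may differ between top versions.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Hypothetical helper: pulls the %CPU / %MEM samples for one PID out of top.log.
public class TopLogParser {

    public static void main(String[] args) throws IOException {
        String pid = "12345";                       // the PID that was passed to `top -p`
        List<String> lines = Files.readAllLines(Paths.get("top.log"));

        for (String line : lines) {
            String trimmed = line.trim();
            if (!trimmed.startsWith(pid + " ")) {   // keep only the process rows for our PID
                continue;
            }
            // Assumed default layout: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
            String[] cols = trimmed.split("\\s+");
            if (cols.length >= 10) {
                System.out.println("cpu=" + cols[8] + "% mem=" + cols[9] + "%");
            }
        }
    }
}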

Java implementation

Thread task

The thread task is defined as a bean in a @Configuration class.

package com.CmJava.config;

import com.CmJava.util.RsaTokenUtil;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

@Configuration
public class MyConfiguration {

    public static final Random RANDOM = new Random();

    // One pre-built request entity per consumer token (10 in total), so each
    // request can carry a random user's Authorization header.
    public static final List<HttpEntity<Void>> httpHeaders = new ArrayList<>();
    static {
        addHeaders(httpHeaders);
    }

    public static int num = 1; // unused in the code shown

    @Autowired
    public RestTemplate restTemplate;

    @Bean
    public RestTemplate getRestTemplate() {
        return new RestTemplate();
    }

    // Build a JWT for each of the 10 consumers and wrap it in an HttpEntity.
    private static void addHeaders(List<HttpEntity<Void>> httpHeaders) {
        for (int user = 1; user <= 10; user++) {
            HttpHeaders requestHeaders = new HttpHeaders();
            requestHeaders.add("Authorization", RsaTokenUtil.jwt(user));
            httpHeaders.add(new HttpEntity<>(requestHeaders));
        }
    }

    // One request: GET a random /testN URI through APISIX with a random consumer's token.
    @Bean
    public Runnable getTask() {
        return () -> {
            int i = RANDOM.nextInt(1000) + 1;
            HttpEntity<Void> httpEntity = httpHeaders.get(RANDOM.nextInt(httpHeaders.size()));
            try {
                ResponseEntity<String> responseEntity = restTemplate.exchange(
                        new URI("http://159.75.26.246:9888/test" + i),
                        HttpMethod.GET, httpEntity, String.class);
                System.out.println(responseEntity.getBody());
            } catch (URISyntaxException e) {
                e.printStackTrace();
            }
        };
    }
}

Controller

Driving the thread pools is still just a web application: when an endpoint receives a request, it starts the tasks on the corresponding scheduled pool.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// @RestController (rather than a plain @Controller) so the void handlers don't try to resolve a view.
// MyTask is the request task class; it is not shown in the post (a sketch follows after this class).
@RestController
public class ConsumerController {

    @Autowired
    public Runnable runnable;       // the Runnable bean from MyConfiguration (unused here; MyTask is used instead)

    @Autowired
    public RestTemplate restTemplate;

    // One scheduled pool per target QPS; the pool size equals the number of requests fired per second.
    public static final ScheduledExecutorService thread1000 = Executors.newScheduledThreadPool(1000);
    public static final ScheduledExecutorService thread500  = Executors.newScheduledThreadPool(500);
    public static final ScheduledExecutorService thread200  = Executors.newScheduledThreadPool(200);
    public static final ScheduledExecutorService thread100  = Executors.newScheduledThreadPool(100);

    @RequestMapping("/test_1000")
    public void test1000() throws InterruptedException {
        // 1000 tasks, each firing once per second => ~1000 requests/second
        for (int i = 1; i <= 1000; i++) {
            thread1000.scheduleAtFixedRate(new MyTask(restTemplate, i, 1000), 1, 1, TimeUnit.SECONDS);
        }
        Thread.sleep(1000 * 60 * 5);   // let the load run for 5 minutes, then stop the pool
        thread1000.shutdown();
        System.out.println("stop!!!");
    }

    @RequestMapping("/test_500")
    public void test500() throws InterruptedException {
        for (int i = 1; i <= 500; i++) {
            thread500.scheduleAtFixedRate(new MyTask(restTemplate, i, 500), 1, 1, TimeUnit.SECONDS);
        }
        Thread.sleep(1000 * 60 * 5);
        thread500.shutdown();
        System.out.println("stop!!!");
    }

    @RequestMapping("/test_200")
    public void test200() throws InterruptedException {
        for (int i = 1; i <= 200; i++) {
            thread200.scheduleAtFixedRate(new MyTask(restTemplate, i, 200), 1, 1, TimeUnit.SECONDS);
        }
        Thread.sleep(1000 * 60 * 5);
        thread200.shutdown();
        System.out.println("stop!!!");
    }

    @RequestMapping("/test_100")
    public void test100() throws InterruptedException {
        for (int i = 1; i <= 100; i++) {
            thread100.scheduleAtFixedRate(new MyTask(restTemplate, i, 100), 0, 1, TimeUnit.SECONDS);
        }
        Thread.sleep(1000 * 60 * 5);
        thread100.shutdown();
        System.out.println("stop!!!");
    }
}
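MyTask itself is not shown in the post. Below is a minimal sketch of what it might look like, assuming it does the same work as the getTask() Runnable bean and only keeps the thread index and target QPS for logging; the constructor signature is taken from the controller above, everything else is an assumption.

import com.CmJava.config.MyConfiguration;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

import java.net.URI;
import java.net.URISyntaxException;

// Hypothetical reconstruction -- not the author's original MyTask.
public class MyTask implements Runnable {

    private final RestTemplate restTemplate;
    private final int threadIndex; // which of the N scheduled threads this is
    private final int qps;         // the target QPS this run belongs to

    public MyTask(RestTemplate restTemplate, int threadIndex, int qps) {
        this.restTemplate = restTemplate;
        this.threadIndex = threadIndex;
        this.qps = qps;
    }

    @Override
    public void run() {
        // Same request logic as the Runnable bean: random /testN URI plus a random consumer token.
        int i = MyConfiguration.RANDOM.nextInt(1000) + 1;
        HttpEntity<Void> httpEntity =
                MyConfiguration.httpHeaders.get(MyConfiguration.RANDOM.nextInt(MyConfiguration.httpHeaders.size()));
        try {
            ResponseEntity<String> response = restTemplate.exchange(
                    new URI("http://159.75.26.246:9888/test" + i),
                    HttpMethod.GET, httpEntity, String.class);
            System.out.println(qps + "/" + threadIndex + ": " + response.getStatusCode());
        } catch (URISyntaxException e) {
            e.printStackTrace();
        }
    }
}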

Test results

I will not go into the data processing in detail. The APISIX records are stored in a PostgreSQL database and pulled out with JDBC; the NGINX log is access.log in the logs folder of the NGINX directory. For each URI that appears in both, subtracting the two recorded times gives the forwarding latency (a sketch of this step follows below).
One caveat up front: in theory APISIX's total time should always be larger than NGINX's, but the two gateways are separate processes and do not record their timestamps in perfect sync, so occasionally the NGINX time comes out larger than the APISIX time. These negative differences are rare and not a real problem; the load-test result is only a statistical reference value anyway.
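As a sketch of that subtraction step: assuming the NGINX access log writes the request URI and $request_time, and that the APISIX records pulled from PostgreSQL have been loaded into a map keyed by URI, the per-request forwarding latency could be computed roughly as below. All field positions, file names and the log format are assumptions, not the post's actual schema.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: APISIX forwarding latency ~= APISIX total time - NGINX total time, matched by URI.
public class LatencyDiff {

    public static void main(String[] args) throws IOException {
        // URI -> total latency (ms) recorded by APISIX, loaded from PostgreSQL via JDBC (not shown)
        Map<String, Double> apisixLatencyMs = new HashMap<>();
        apisixLatencyMs.put("/test42", 6.3);   // dummy value for illustration

        // Assumed NGINX log lines of the form "<uri> <request_time_in_seconds>", e.g. "/test42 0.002"
        for (String line : Files.readAllLines(Paths.get("access.log"))) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length < 2 || !apisixLatencyMs.containsKey(parts[0])) {
                continue;
            }
            double nginxMs = Double.parseDouble(parts[1]) * 1000;          // $request_time is in seconds
            double forwardingMs = apisixLatencyMs.get(parts[0]) - nginxMs; // can go negative (clocks are not in sync)
            System.out.println(parts[0] + " -> " + forwardingMs + " ms");
        }
    }
}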

100 requests/second

APISIX forwarding latency (figure)

Latency distribution (figure)

CPU and memory (figure)

90th percentile (negatives excluded): 4.99 ms
90th percentile (negatives included): 4.00 ms

200 requests/second

APISIX forwarding latency (figure)

Latency distribution (figure)

CPU and memory (figure)

90th percentile (negatives excluded): 4.99 ms
90th percentile (negatives included): 4.00 ms

500 requests/second

APISIX forwarding latency (figure)

Latency distribution (figure)

CPU and memory (figure)

90th percentile (negatives excluded): 6.00 ms
90th percentile (negatives included): 5.99 ms

1,000 requests/second

APISIX forwarding latency (figure)

Latency distribution (figure)

CPU and memory (figure)

90th percentile (negatives excluded): 9.0 ms
90th percentile (negatives included): 6.99 ms

Conclusion

Overall, the 90th-percentile forwarding latency stays below 10 ms at all four load levels, CPU usage remains stable, and memory grows only mildly, so the performance is good enough for production use.
