%%
Related:
- [[k6 Cloud]]
- [[k6 Test Builder]]
- [[k6 CLI]]
- [[k6 (tool)]]
%%
# How to use k6 - a walkthrough of k6 Cloud
## The video
<iframe width="560" height="315" src="https://www.youtube.com/embed/nwDI5k3gUIY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Timestamps
0:00 Intro
0:57 The k6 Cloud GUI
12:50 How to create a test script on k6 Cloud: Test Builder and Script Editor
23:28 Scripting in local IDE
30:22 How to use the k6 CLI
34:48 Metrics, visualization, and analysis
## Transcript
1
00:00:00,160 --> 00:00:01,107
- My name is Bill Rainaud,
2
00:00:01,107 --> 00:00:04,270
I'm the Inside Sales Exec with k6.
3
00:00:04,270 --> 00:00:06,280
Happy to talk to everybody today,
4
00:00:06,280 --> 00:00:08,370
I'm looking forward to
the structured walkthrough
5
00:00:08,370 --> 00:00:10,210
that we have for all of our guests.
6
00:00:10,210 --> 00:00:12,460
So, I'm gonna go ahead and
start sharing my screen,
7
00:00:12,460 --> 00:00:13,610
and then I'll take everybody
8
00:00:13,610 --> 00:00:15,090
through the standard walkthrough
9
00:00:15,090 --> 00:00:17,420
that I provide with the k6 Cloud.
10
00:00:17,420 --> 00:00:19,500
Here's how we're going to
structure it for today,
11
00:00:19,500 --> 00:00:22,270
we are going to cover the k6 Cloud GUI.
12
00:00:22,270 --> 00:00:24,330
From there, we'll move
into test authoring,
13
00:00:24,330 --> 00:00:26,600
I'll show how the test builder works
14
00:00:26,600 --> 00:00:29,060
in addition to our script editor.
15
00:00:29,060 --> 00:00:29,893
From there,
16
00:00:29,893 --> 00:00:31,540
we'll move over to the local IDE,
17
00:00:31,540 --> 00:00:33,500
we'll show you an engineered test script
18
00:00:33,500 --> 00:00:34,730
that we have available.
19
00:00:34,730 --> 00:00:37,530
We'll walk through that,
show you some tips and tricks
20
00:00:37,530 --> 00:00:39,920
that you can include inside
of your own test scripts,
21
00:00:39,920 --> 00:00:41,580
or even optimize those.
22
00:00:41,580 --> 00:00:42,600
And then finally,
23
00:00:42,600 --> 00:00:45,450
we'll open up with the k6 CLI,
24
00:00:45,450 --> 00:00:48,070
show you how we can bridge
the k6 Open Source Solution
25
00:00:48,070 --> 00:00:50,100
into the k6 Cloud.
26
00:00:50,100 --> 00:00:51,700
And then finally, we'll be wrapping up
27
00:00:51,700 --> 00:00:54,630
with metrics, visualization and analysis
28
00:00:54,630 --> 00:00:56,303
with inside of the k6 Cloud.
29
00:00:57,490 --> 00:01:00,680
So, starting us off here,
we're logged into the k6 Cloud,
30
00:01:00,680 --> 00:01:03,540
we're immediately met with
our project dashboard.
31
00:01:03,540 --> 00:01:04,373
Taking a look,
32
00:01:04,373 --> 00:01:07,920
all of my tests are
organized into cards here.
33
00:01:07,920 --> 00:01:11,110
And picking on my Insights
Demo with Cloud Execution test,
34
00:01:11,110 --> 00:01:13,330
I can see some high level
metrics about my test,
35
00:01:13,330 --> 00:01:17,540
such as the last execution
time, the average response time,
36
00:01:17,540 --> 00:01:19,420
the total number of virtual users,
37
00:01:19,420 --> 00:01:21,770
and the total test duration of my test.
38
00:01:21,770 --> 00:01:23,980
Coming out to these
individualized test runs,
39
00:01:23,980 --> 00:01:25,220
if I hover over those,
40
00:01:25,220 --> 00:01:28,420
I can visualize the 95th percentile metric
41
00:01:28,420 --> 00:01:30,010
generated in milliseconds,
42
00:01:30,010 --> 00:01:33,430
as well as the test
execution date and time.
43
00:01:33,430 --> 00:01:34,270
At a high level here,
44
00:01:34,270 --> 00:01:35,570
I can see that most of my tests
45
00:01:35,570 --> 00:01:38,920
are taking approximately 69
milliseconds to complete,
46
00:01:38,920 --> 00:01:40,370
thus I can infer that there are
47
00:01:40,370 --> 00:01:43,570
no major performance
regressions at a high level.
48
00:01:43,570 --> 00:01:44,840
So, if we were to compare this to,
49
00:01:44,840 --> 00:01:47,700
let's say our Crocodile
API test that we have here,
50
00:01:47,700 --> 00:01:49,700
we can see that our initial test
51
00:01:49,700 --> 00:01:51,700
took about 4,000 milliseconds,
52
00:01:51,700 --> 00:01:54,860
and as performance
started to scale up here,
53
00:01:54,860 --> 00:01:57,660
and there were introductions
of new features maybe here,
54
00:01:57,660 --> 00:02:01,030
we saw that performance trended downwards,
55
00:02:01,030 --> 00:02:03,020
and we have this major
performance regression
56
00:02:03,020 --> 00:02:04,290
where performance is taking
57
00:02:04,290 --> 00:02:07,840
about 10 times the amount of
time to execute this test,
58
00:02:07,840 --> 00:02:11,090
according to the 95th percentile.
59
00:02:11,090 --> 00:02:11,923
So here,
60
00:02:11,923 --> 00:02:13,900
I can infer that there's a
major performance regression,
61
00:02:13,900 --> 00:02:17,363
and that I need to do some
remediation behind this test.
62
00:02:18,350 --> 00:02:20,120
Now, changing gears a little bit,
63
00:02:20,120 --> 00:02:23,750
we've included numerous ease
of use and convenience features
64
00:02:23,750 --> 00:02:25,210
with inside of the k6 Cloud
65
00:02:25,210 --> 00:02:27,490
to make everyone's life
a little bit easier.
66
00:02:27,490 --> 00:02:30,050
I imagine everybody watching for today
67
00:02:30,050 --> 00:02:31,960
has used a search bar at some point.
68
00:02:31,960 --> 00:02:32,930
We type in "demo,"
69
00:02:32,930 --> 00:02:35,530
and we can quickly find
our Insights Demo test,
70
00:02:35,530 --> 00:02:36,363
our Simple Demo,
71
00:02:36,363 --> 00:02:38,570
or our Browser demo test.
72
00:02:38,570 --> 00:02:39,760
In conjunction to this,
73
00:02:39,760 --> 00:02:42,020
we have the ability to sort our test cards
74
00:02:42,020 --> 00:02:44,120
by the last test run,
when they were created,
75
00:02:44,120 --> 00:02:46,950
or the specified name of our test.
76
00:02:46,950 --> 00:02:48,410
Now, coming up to the top here,
77
00:02:48,410 --> 00:02:50,870
we have our k6 project folder,
78
00:02:50,870 --> 00:02:52,830
we have our unique project ID.
79
00:02:52,830 --> 00:02:53,950
If I hop around,
80
00:02:53,950 --> 00:02:57,030
let's say I change to a
k6 development folder,
81
00:02:57,030 --> 00:02:59,090
my project ID immediately updates,
82
00:02:59,090 --> 00:03:02,470
as well as the cards with
inside of that project folder.
83
00:03:02,470 --> 00:03:05,140
The project ID is essentially how the test
84
00:03:05,140 --> 00:03:07,680
will be aggregated with
inside of the k6 Cloud,
85
00:03:07,680 --> 00:03:09,320
and how those subsequent test runs
86
00:03:09,320 --> 00:03:12,210
will be associated with those tests.
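In script terms, that aggregation happens through the cloud extension block in a test's options. A minimal sketch, assuming a placeholder project ID (12345), test name, and target URL:

```js
// Tie a script's cloud runs to a project: the projectID below is a placeholder,
// use the ID shown at the top of your own project folder.
import http from 'k6/http';

export const options = {
  ext: {
    loadimpact: {
      projectID: 12345,                           // placeholder project ID
      name: 'Insights Demo with Cloud Execution', // runs are grouped under this test name
    },
  },
};

export default function () {
  http.get('https://test.k6.io'); // placeholder target
}
```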
87
00:03:12,210 --> 00:03:14,170
Now, changing gears a
little bit more here,
88
00:03:14,170 --> 00:03:16,070
if we come to the top left hand corner,
89
00:03:16,070 --> 00:03:17,970
we have this drop down arrow.
90
00:03:17,970 --> 00:03:18,803
If I click it,
91
00:03:18,803 --> 00:03:21,160
I can visualize my
individual account settings.
92
00:03:21,160 --> 00:03:23,330
Most notably is an API token,
93
00:03:23,330 --> 00:03:26,440
which we can use to authenticate
our k6 Cloud account
94
00:03:26,440 --> 00:03:28,440
via the k6 CLI.
95
00:03:28,440 --> 00:03:30,810
And we do have some
organizational settings,
96
00:03:30,810 --> 00:03:33,340
as this is an administrative account.
97
00:03:33,340 --> 00:03:34,980
Most notably with inside of here
98
00:03:34,980 --> 00:03:36,690
is your members tab.
99
00:03:36,690 --> 00:03:39,280
This is how you're
going to add new members
100
00:03:39,280 --> 00:03:40,940
to your subscription.
101
00:03:40,940 --> 00:03:45,210
And so we're going to invite
new members to this org.
102
00:03:45,210 --> 00:03:48,090
I'm gonna pick on Nicole,
because she's the host for today.
103
00:03:48,090 --> 00:03:49,480
And let's say, hypothetically,
104
00:03:49,480 --> 00:03:53,763
we have Nicole as a new
junior developer or QA.
105
00:03:55,410 --> 00:03:57,760
So, we're gonna set her
up as a project member,
106
00:03:57,760 --> 00:04:02,670
and let's say we wanna give
her access to the k6 project.
107
00:04:02,670 --> 00:04:04,280
So we can come down,
108
00:04:04,280 --> 00:04:06,370
we see our k6 project here,
109
00:04:06,370 --> 00:04:07,690
we select that.
110
00:04:07,690 --> 00:04:10,520
And then we click the
SEND INVITATION button.
111
00:04:10,520 --> 00:04:12,770
Immediately after
submitting that invitation,
112
00:04:12,770 --> 00:04:14,290
Nicole will receive an email,
113
00:04:14,290 --> 00:04:17,090
she's able to get started
on the k6 project folder,
114
00:04:17,090 --> 00:04:18,560
and we have peace of mind
115
00:04:18,560 --> 00:04:19,970
knowing that she doesn't have access
116
00:04:19,970 --> 00:04:23,143
to maybe our Grafana demo
or our e-commerce site.
117
00:04:24,050 --> 00:04:25,990
I'll call out here in addition,
118
00:04:25,990 --> 00:04:28,620
so we have no limitation
on the amount of users
119
00:04:28,620 --> 00:04:31,870
that you can add to a
k6 Cloud subscription,
120
00:04:31,870 --> 00:04:33,560
this increases collaboration,
121
00:04:33,560 --> 00:04:37,140
as well as decreases cost overall.
122
00:04:37,140 --> 00:04:38,710
I know it's a key differentiator
123
00:04:38,710 --> 00:04:40,770
between k6 and some of the market leaders
124
00:04:40,770 --> 00:04:43,360
that have per-seat licensing.
125
00:04:43,360 --> 00:04:44,230
In addition to this,
126
00:04:44,230 --> 00:04:46,990
so we have numerous
integrations for SAML SSO,
127
00:04:46,990 --> 00:04:51,770
we currently support Okta,
as well as Azure AD natively,
128
00:04:51,770 --> 00:04:53,610
and we've currently opened up support
129
00:04:53,610 --> 00:04:55,340
for SAML 2.0 providers,
130
00:04:55,340 --> 00:04:58,100
so as long as they
follow the 2.0 protocol,
131
00:04:58,100 --> 00:05:01,313
we should be able to
support those integrations.
132
00:05:02,680 --> 00:05:04,620
Now, changing things up a little bit,
133
00:05:04,620 --> 00:05:06,640
we're gonna come over to
the left hand side here
134
00:05:06,640 --> 00:05:09,970
and take a look at our
project-based foldering system.
135
00:05:09,970 --> 00:05:10,930
With inside of here,
136
00:05:10,930 --> 00:05:13,670
we are allowed to set up
additional project folders
137
00:05:13,670 --> 00:05:16,170
where we can store our test cards.
138
00:05:16,170 --> 00:05:18,840
We've seen clients break
this up in a number of ways
139
00:05:18,840 --> 00:05:21,600
based on the web application
that's being tested,
140
00:05:21,600 --> 00:05:23,850
the project, the development environment,
141
00:05:23,850 --> 00:05:27,660
whether that be staging,
pre-production, production,
142
00:05:27,660 --> 00:05:29,910
all the way up to development,
143
00:05:29,910 --> 00:05:33,510
or even across the specific
protocol that's being tested,
144
00:05:33,510 --> 00:05:37,930
whether that's HTTP,
HTTPS, WebSocket, or gRPC.
145
00:05:37,930 --> 00:05:38,950
At the end of the day,
146
00:05:38,950 --> 00:05:40,590
it's just a folder-based system,
147
00:05:40,590 --> 00:05:43,740
and we encourage all of our
clients to use it as such.
148
00:05:43,740 --> 00:05:44,920
Now, I will call out here
149
00:05:44,920 --> 00:05:47,400
that there's no limitation
on the amount of projects
150
00:05:47,400 --> 00:05:50,423
that you can create with inside
of your k6 Cloud account.
151
00:05:52,080 --> 00:05:52,960
Coming down,
152
00:05:52,960 --> 00:05:54,370
starting with our managed section,
153
00:05:54,370 --> 00:05:56,410
we have numerous features
with inside of here
154
00:05:56,410 --> 00:05:59,110
that make analysis and integrations
155
00:05:59,110 --> 00:06:01,520
a lot easier with inside of the k6 Cloud.
156
00:06:01,520 --> 00:06:04,180
Starting off with our threshold section.
157
00:06:04,180 --> 00:06:05,830
At a high level with inside of here,
158
00:06:05,830 --> 00:06:07,710
you can monitor all the thresholds
159
00:06:07,710 --> 00:06:09,870
that are included with
inside of your test.
160
00:06:09,870 --> 00:06:12,530
We can sort by a specified project,
161
00:06:12,530 --> 00:06:14,310
the status of our threshold,
162
00:06:14,310 --> 00:06:17,063
or even a defined period of time.
163
00:06:17,990 --> 00:06:18,823
At a high level,
164
00:06:18,823 --> 00:06:21,120
we can see the total
thresholds that we've created,
165
00:06:21,120 --> 00:06:22,460
total failed thresholds,
166
00:06:22,460 --> 00:06:25,620
and even the failure percentage rate.
167
00:06:25,620 --> 00:06:28,100
If I were to come down, we
have links to our projects,
168
00:06:28,100 --> 00:06:31,630
as well as the tests that
house those thresholds,
169
00:06:31,630 --> 00:06:33,400
and with inside of the history section,
170
00:06:33,400 --> 00:06:36,160
I can monitor the regressions
of these thresholds.
171
00:06:36,160 --> 00:06:39,470
Taking a look at the check
failure rating that I have here,
172
00:06:39,470 --> 00:06:43,340
I can see that this is consistently
failing across my tests,
173
00:06:43,340 --> 00:06:47,540
so either I set this up
to consistently fail,
174
00:06:47,540 --> 00:06:49,700
or there's a major
performance regression here
175
00:06:49,700 --> 00:06:51,933
that I need to do some remediation around.
176
00:06:53,010 --> 00:06:53,843
Moving down to our --
177
00:06:53,843 --> 00:06:55,760
- [Nicole] That was a
new feature, wasn't it?
178
00:06:55,760 --> 00:06:56,593
- [Bill] Yes it was.
179
00:06:56,593 --> 00:06:58,140
- [Nicole] I think we
released that a few weeks ago?
180
00:06:58,140 --> 00:06:59,350
- Yep, absolutely.
181
00:06:59,350 --> 00:07:00,650
It's one of the newest features
182
00:07:00,650 --> 00:07:03,910
that we've integrated with
inside of the k6 Cloud.
183
00:07:03,910 --> 00:07:06,250
Both of our front-end and back-end teams
184
00:07:06,250 --> 00:07:09,150
diligently worked on this
feature to get it integrated
185
00:07:09,150 --> 00:07:11,450
within the most recent
release that we've had.
186
00:07:13,790 --> 00:07:16,040
Cool, so picking up here,
187
00:07:16,040 --> 00:07:18,200
we also have our test scheduler.
188
00:07:18,200 --> 00:07:19,580
So, if you're not looking to integrate
189
00:07:19,580 --> 00:07:21,780
with inside of a CICD pipeline,
190
00:07:21,780 --> 00:07:24,480
the test scheduler allows
you to define a schedule
191
00:07:24,480 --> 00:07:27,610
for any one of your tests
with inside of the k6 Cloud.
192
00:07:27,610 --> 00:07:29,960
Extremely beneficial if
you're looking to schedule
193
00:07:29,960 --> 00:07:33,860
maybe a one off schedule
for one particular test,
194
00:07:33,860 --> 00:07:36,200
and maybe you wanna run that
at three o'clock in the morning
195
00:07:36,200 --> 00:07:38,220
when you know that no
users are with inside
196
00:07:38,220 --> 00:07:40,520
of the web application
you're testing against,
197
00:07:40,520 --> 00:07:41,970
you can define a schedule
198
00:07:41,970 --> 00:07:44,430
with this ADD SCHEDULE button here.
199
00:07:44,430 --> 00:07:45,400
And with that schedule,
200
00:07:45,400 --> 00:07:48,110
we can see the activity in
the first run, the next run,
201
00:07:48,110 --> 00:07:50,540
frequency, when that frequency ends.
202
00:07:50,540 --> 00:07:52,550
We have a link to the specific test,
203
00:07:52,550 --> 00:07:54,140
as well as an editable button
204
00:07:54,140 --> 00:07:56,163
if you need to change that schedule.
205
00:07:57,570 --> 00:07:59,470
Coming down to our notifications.
206
00:07:59,470 --> 00:08:02,140
So, we have the ability
to configure notifications
207
00:08:02,140 --> 00:08:04,620
from with inside of the k6 Cloud web app.
208
00:08:04,620 --> 00:08:07,540
If I come over to this CREATE
NEW NOTIFICATION button,
209
00:08:07,540 --> 00:08:10,440
you'll see we have native
integrations with inside of Slack,
210
00:08:10,440 --> 00:08:11,680
Microsoft Teams,
211
00:08:11,680 --> 00:08:14,500
as well as the ability to
create a custom web hook,
212
00:08:14,500 --> 00:08:16,253
or even an email template.
213
00:08:17,180 --> 00:08:18,640
If I click this CREATE button,
214
00:08:18,640 --> 00:08:19,870
you'll see some of the parameters
215
00:08:19,870 --> 00:08:22,540
that we can pass as part of
our web hook or email template,
216
00:08:22,540 --> 00:08:25,560
such as the test
organization ID, project ID,
217
00:08:25,560 --> 00:08:27,780
test ID, URL, et cetera.
218
00:08:27,780 --> 00:08:29,520
And to show you some of the triggers,
219
00:08:29,520 --> 00:08:32,480
we can trigger a notification
based on the test starting,
220
00:08:32,480 --> 00:08:34,460
finishing, failing, timing out,
221
00:08:34,460 --> 00:08:36,493
or aborting by a number of conditions.
222
00:08:38,200 --> 00:08:40,420
Now, coming over to our Cloud APM section,
223
00:08:40,420 --> 00:08:42,650
this is a new feature
that we've integrated
224
00:08:42,650 --> 00:08:44,800
with inside of the k6 Cloud.
225
00:08:44,800 --> 00:08:45,633
With inside of here,
226
00:08:45,633 --> 00:08:47,450
you can create new configurations
227
00:08:47,450 --> 00:08:49,390
for any of your Cloud APM tools,
228
00:08:49,390 --> 00:08:51,870
whether that be Azure Monitor, Datadog,
229
00:08:51,870 --> 00:08:54,003
Grafana Cloud, or even New Relic.
230
00:08:56,160 --> 00:08:59,470
So with that, we're gonna come
on down to our saved tests.
231
00:08:59,470 --> 00:09:00,950
So, these are tests that are excluded
232
00:09:00,950 --> 00:09:02,760
from our data retention policy
233
00:09:02,760 --> 00:09:05,460
depending on the subscription
being considered,
234
00:09:05,460 --> 00:09:07,403
or maybe your testing requirements.
235
00:09:08,560 --> 00:09:10,850
We have the ability to save test results
236
00:09:10,850 --> 00:09:13,430
based on a specific duration,
237
00:09:13,430 --> 00:09:16,250
that duration could be
either 30 or 90 days
238
00:09:16,250 --> 00:09:19,150
if you're looking to go with
one of the self-service plans.
239
00:09:19,150 --> 00:09:19,983
Additionally,
240
00:09:19,983 --> 00:09:22,410
we do have the ability to
configure custom data retention
241
00:09:22,410 --> 00:09:25,580
that can be associated with
your k6 Cloud subscription.
242
00:09:25,580 --> 00:09:27,080
This is extremely beneficial
243
00:09:27,080 --> 00:09:29,520
if your organization has
a compliance requirement
244
00:09:29,520 --> 00:09:32,070
that mandates you hold
onto your performance data
245
00:09:32,070 --> 00:09:34,150
for a given period of time.
246
00:09:34,150 --> 00:09:36,900
So with that, we can configure
custom data retention
247
00:09:36,900 --> 00:09:40,290
attached to your k6 Cloud subscription.
248
00:09:40,290 --> 00:09:41,710
Last, but not least,
249
00:09:41,710 --> 00:09:45,580
with inside of the MANAGE
section, we have Usage reports.
250
00:09:45,580 --> 00:09:46,840
So, with inside of here,
251
00:09:46,840 --> 00:09:49,020
you can gain a comprehensive overview
252
00:09:49,020 --> 00:09:50,440
of all the usage metrics
253
00:09:50,440 --> 00:09:54,270
that are being generated behind
your k6 Cloud subscription.
254
00:09:54,270 --> 00:09:56,570
We can sort by a specified project,
255
00:09:56,570 --> 00:09:58,650
a contributor to one of our projects,
256
00:09:58,650 --> 00:10:01,210
or even a defined period of time.
257
00:10:01,210 --> 00:10:02,310
With inside of here,
258
00:10:02,310 --> 00:10:04,690
as mentioned, we can gain
a comprehensive overview
259
00:10:04,690 --> 00:10:05,930
of all the usage metrics.
260
00:10:05,930 --> 00:10:08,530
We have links to the projects,
261
00:10:08,530 --> 00:10:11,020
as well as the most recent
tests that have been run
262
00:10:11,020 --> 00:10:12,583
with inside of the k6 Cloud.
263
00:10:13,580 --> 00:10:15,280
And this was actually brought to us
264
00:10:15,280 --> 00:10:17,500
by one of our enterprise clients,
265
00:10:17,500 --> 00:10:18,570
they had mentioned the need
266
00:10:18,570 --> 00:10:21,420
to see their usage metrics
in a meaningful way,
267
00:10:21,420 --> 00:10:23,690
so we did incorporate this feature,
268
00:10:23,690 --> 00:10:25,080
and so with that data,
269
00:10:25,080 --> 00:10:30,080
they were able to generate
a predictable forecast
270
00:10:30,450 --> 00:10:33,310
for their load and
performance testing times
271
00:10:33,310 --> 00:10:35,513
with inside of their development cycle.
272
00:10:37,070 --> 00:10:38,840
So, at this stage,
273
00:10:38,840 --> 00:10:40,230
we have our usage reports,
274
00:10:40,230 --> 00:10:42,190
and we're gonna change gear a little bit.
275
00:10:42,190 --> 00:10:44,520
We're gonna come down
to our EXPLORE section
276
00:10:44,520 --> 00:10:46,150
and pick up here.
277
00:10:46,150 --> 00:10:50,350
So, k6 has numerous open
source integrations and tools
278
00:10:50,350 --> 00:10:52,320
that could be leveraged in tandem
279
00:10:52,320 --> 00:10:55,400
with both the k6 open source software
280
00:10:55,400 --> 00:10:57,870
and the k6 Cloud.
281
00:10:57,870 --> 00:11:00,320
Starting first with our
HAR to k6 converter,
282
00:11:00,320 --> 00:11:03,930
this is associated with a
public GitHub repository
283
00:11:03,930 --> 00:11:06,420
where the solution can be
cloned to your local machine
284
00:11:06,420 --> 00:11:08,060
and leveraged from there.
285
00:11:08,060 --> 00:11:10,930
And if you've used Chrome's
development tool set in the past
286
00:11:10,930 --> 00:11:12,190
to record a user journey,
287
00:11:12,190 --> 00:11:15,270
and you did capture the
HTTP archive recording,
288
00:11:15,270 --> 00:11:18,010
we can take that file and then
convert it with the converter
289
00:11:18,010 --> 00:11:20,610
to a k6 JavaScript test file.
290
00:11:20,610 --> 00:11:24,130
The same could be said also
for our JMeter to k6 converter
291
00:11:24,130 --> 00:11:28,070
for JMX files, Postman
collections, and OpenAPI,
292
00:11:28,070 --> 00:11:30,310
and Swagger files as well.
293
00:11:30,310 --> 00:11:31,440
In conjunction to this,
294
00:11:31,440 --> 00:11:32,840
we have numerous integrations
295
00:11:32,840 --> 00:11:35,410
with inside of popular CICD tools.
296
00:11:35,410 --> 00:11:38,260
Depending on the CICD
tool that you're using,
297
00:11:38,260 --> 00:11:41,360
we have a writeup composed
by our developer staff
298
00:11:41,360 --> 00:11:43,810
that walks you through integrating k6
299
00:11:43,810 --> 00:11:46,260
into that CICD solution.
300
00:11:46,260 --> 00:11:47,480
And with that,
301
00:11:47,480 --> 00:11:50,740
also shows you how to configure
the YAML file as well,
302
00:11:50,740 --> 00:11:52,803
which governs the pipeline.
303
00:11:54,240 --> 00:11:56,630
Now, I am by no means a DevOps guy,
304
00:11:56,630 --> 00:11:58,770
I was able to follow the GitLab write up,
305
00:11:58,770 --> 00:12:02,780
and then integrate k6
with inside of GitLab.
306
00:12:02,780 --> 00:12:05,140
I have this pipeline set up on a schedule,
307
00:12:05,140 --> 00:12:08,360
but I could also run it manually
from with inside of here.
308
00:12:08,360 --> 00:12:10,990
And then here, we can see
that my build is starting,
309
00:12:10,990 --> 00:12:12,090
I'm going to deploy it,
310
00:12:12,090 --> 00:12:16,708
and then run a cloud test
against my web application.
311
00:12:16,708 --> 00:12:18,300
So, picking back up here,
312
00:12:18,300 --> 00:12:21,430
we have a link to our
documentation, API reference,
313
00:12:21,430 --> 00:12:26,360
if you need to brush up on
some of our API documentation,
314
00:12:26,360 --> 00:12:27,193
CLI reference,
315
00:12:27,193 --> 00:12:29,800
if you need to take a look
at some popular commands
316
00:12:29,800 --> 00:12:31,720
that can be triggered from our CLI
317
00:12:31,720 --> 00:12:34,600
and ported to the k6 Cloud if need be.
318
00:12:34,600 --> 00:12:36,510
And then my favorite, the Support button.
319
00:12:36,510 --> 00:12:39,680
So, if you run into any issues
with your k6 Cloud account,
320
00:12:39,680 --> 00:12:41,070
just fill out the support ticket,
321
00:12:41,070 --> 00:12:42,959
and we'll get back to you
in a period
322
00:12:42,959 --> 00:12:45,940
no greater than 24 hours.
323
00:12:45,940 --> 00:12:46,920
So, we're gonna come back
324
00:12:46,920 --> 00:12:50,000
with inside of our k6 project folder,
325
00:12:50,000 --> 00:12:53,620
and we're gonna get into
test authoring here.
326
00:12:53,620 --> 00:12:55,590
So, at the top right-hand corner,
327
00:12:55,590 --> 00:12:57,660
we have this CREATE NEW TEST button,
328
00:12:57,660 --> 00:12:59,820
and one on the left-hand sidebar here.
329
00:12:59,820 --> 00:13:00,800
If we click that,
330
00:13:00,800 --> 00:13:03,940
we're immediately loaded
up with two unique options.
331
00:13:03,940 --> 00:13:05,500
We have the test builder,
332
00:13:05,500 --> 00:13:07,510
and the script editor here.
333
00:13:07,510 --> 00:13:08,830
Starting with the test builder,
334
00:13:08,830 --> 00:13:12,040
so if your team has a high level
understanding of JavaScript
335
00:13:12,040 --> 00:13:13,520
and they wanna lean on our GUI
336
00:13:13,520 --> 00:13:16,630
to maybe test against some of their APIs,
337
00:13:16,630 --> 00:13:19,670
you're more than encouraged to
start with the test builder.
338
00:13:19,670 --> 00:13:21,620
However, being that we are k6,
339
00:13:21,620 --> 00:13:24,150
and we pride ourselves on
the developer experience,
340
00:13:24,150 --> 00:13:26,290
we always start with the script editor.
341
00:13:26,290 --> 00:13:31,290
So if your team, your engineering
team loves JavaScript,
342
00:13:31,550 --> 00:13:33,390
feels comfortable with JavaScript,
343
00:13:33,390 --> 00:13:36,360
and wants to script
out their test scripts,
344
00:13:36,360 --> 00:13:38,270
you may use the script editor,
345
00:13:38,270 --> 00:13:39,350
and we're gonna start here,
346
00:13:39,350 --> 00:13:42,150
and transition into the test
builder a little bit later.
347
00:13:43,370 --> 00:13:46,170
So, immediately after I click
that START SCRIPTING button,
348
00:13:46,170 --> 00:13:48,740
I'm loaded up with a boilerplate here.
349
00:13:48,740 --> 00:13:51,440
I have my initialization context,
350
00:13:51,440 --> 00:13:52,620
options object,
351
00:13:52,620 --> 00:13:54,460
and export default function.
352
00:13:54,460 --> 00:13:56,890
In terms of k6 JavaScript test files,
353
00:13:56,890 --> 00:13:58,990
these are the three primary components
354
00:13:58,990 --> 00:14:03,300
that comprise all k6
JavaScript test files.
355
00:14:03,300 --> 00:14:05,540
With that, we have our
initialization context,
356
00:14:05,540 --> 00:14:07,090
we're importing any dependencies,
357
00:14:07,090 --> 00:14:10,290
any methods that we'll utilize
with inside of our scripts.
358
00:14:10,290 --> 00:14:11,550
We have our options object,
359
00:14:11,550 --> 00:14:15,210
which houses our stages or
ramping profiles to our test,
360
00:14:15,210 --> 00:14:19,170
which essentially are defining
how long our virtual users
361
00:14:19,170 --> 00:14:21,770
will be sticking around
inside of our test script,
362
00:14:21,770 --> 00:14:23,060
how quickly they'll be entering,
363
00:14:23,060 --> 00:14:25,360
and when they'll be exiting as well.
364
00:14:25,360 --> 00:14:26,880
Additionally, we have thresholds.
365
00:14:26,880 --> 00:14:29,970
Now, thresholds are very
important when it comes to k6,
366
00:14:29,970 --> 00:14:32,200
they are binary pass/fail criteria
367
00:14:32,200 --> 00:14:34,800
established to generate
a non-zero exit code
368
00:14:34,800 --> 00:14:36,640
with inside your test.
369
00:14:36,640 --> 00:14:39,840
And then we have our
extension into the k6 Cloud,
370
00:14:39,840 --> 00:14:44,253
where we spin up our load
generators on AWS currently.
371
00:14:45,550 --> 00:14:47,970
Coming down to our
export default function,
372
00:14:47,970 --> 00:14:49,220
this is the main housing
373
00:14:49,220 --> 00:14:51,663
for all of the logic behind your test.
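Putting those three pieces together, a minimal k6 test file looks roughly like the sketch below; the stage timings, threshold value, project ID, and URL are all placeholders:

```js
// Initialization context: import the dependencies and methods the script will use.
import http from 'k6/http';
import { sleep } from 'k6';

// Options object: ramping profile, thresholds, and the extension into the k6 Cloud.
export const options = {
  stages: [
    { duration: '1m', target: 20 }, // how quickly virtual users enter
    { duration: '3m', target: 20 }, // how long they stick around
    { duration: '1m', target: 0 },  // when they exit
  ],
  thresholds: {
    // binary pass/fail criteria; a failed threshold produces a non-zero exit code
    http_req_duration: ['p(95)<1000'],
  },
  ext: {
    loadimpact: {
      projectID: 12345,     // placeholder
      name: 'Example test',
    },
  },
};

// Export default function: the main housing for the test logic,
// executed repeatedly by every virtual user.
export default function () {
  http.get('https://test.k6.io'); // placeholder URL
  sleep(1);
}
```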
374
00:14:51,663 --> 00:14:54,350
Now, we're gonna change
gears a little bit.
375
00:14:54,350 --> 00:14:56,540
We have numerous scripting examples
376
00:14:56,540 --> 00:14:59,210
that can be leveraged from with
inside of the script editor.
377
00:14:59,210 --> 00:15:01,600
If you're looking to do
something with authentication,
378
00:15:01,600 --> 00:15:04,760
cookies, correlation of
data, uploading files,
379
00:15:04,760 --> 00:15:06,200
extracting values or tokens,
380
00:15:06,200 --> 00:15:08,580
you can utilize any of
these scripting examples.
381
00:15:08,580 --> 00:15:11,520
So, to show you a basic authentication example,
382
00:15:11,520 --> 00:15:14,340
again, it's the same architecture overall,
383
00:15:14,340 --> 00:15:17,720
initialization context, options object,
384
00:15:17,720 --> 00:15:20,220
a few globally declared constants
385
00:15:20,220 --> 00:15:22,330
that we'll be using with
inside of our logic,
386
00:15:22,330 --> 00:15:25,120
and the logic stored in our
export default function.
387
00:15:25,120 --> 00:15:28,430
Also from with inside of the
k6 Cloud, or the script editor,
388
00:15:28,430 --> 00:15:30,360
we have the ability to upload a HAR file,
389
00:15:30,360 --> 00:15:32,970
as well as record a browser scenario.
390
00:15:32,970 --> 00:15:35,120
Now, we do have a companion tool to this
391
00:15:35,120 --> 00:15:37,260
in the form of a browser extension.
392
00:15:37,260 --> 00:15:39,840
So, we're gonna hop over to
my fake e-commerce store here
393
00:15:39,840 --> 00:15:40,890
that was set up by Tom,
394
00:15:40,890 --> 00:15:41,950
thanks, Tom,
395
00:15:41,950 --> 00:15:42,850
and we're gonna go ahead
396
00:15:42,850 --> 00:15:45,240
and kickstart our browser extension.
397
00:15:45,240 --> 00:15:46,870
Note that this is currently supported
398
00:15:46,870 --> 00:15:50,520
via Google Chrome and Mozilla Firefox.
399
00:15:50,520 --> 00:15:52,010
So, we're gonna start our recorder here
400
00:15:52,010 --> 00:15:53,850
to capture our HAR file.
401
00:15:53,850 --> 00:15:54,683
We're gonna go ahead
402
00:15:54,683 --> 00:15:57,520
and just maybe go through
a standard user journey.
403
00:15:57,520 --> 00:15:59,590
Maybe I've just created this beanie,
404
00:15:59,590 --> 00:16:02,200
and so I want to test the
functionality of that beanie,
405
00:16:02,200 --> 00:16:05,200
ensuring that my users
can purchase this item.
406
00:16:05,200 --> 00:16:07,560
So, I'm gonna go ahead
and add this to my cart,
407
00:16:07,560 --> 00:16:08,593
view my cart.
408
00:16:09,480 --> 00:16:11,110
I see that the beanie has been added,
409
00:16:11,110 --> 00:16:14,480
so that's good news.
410
00:16:14,480 --> 00:16:15,313
Perfect.
411
00:16:15,313 --> 00:16:16,290
So, I'm gonna come down,
412
00:16:16,290 --> 00:16:18,640
I've passed some dummy
data with inside of here
413
00:16:18,640 --> 00:16:20,670
that's been saved via my cookies.
414
00:16:20,670 --> 00:16:23,110
At this stage, I'm gonna go
ahead and place the order,
415
00:16:23,110 --> 00:16:24,290
and at this stage,
416
00:16:24,290 --> 00:16:26,670
an order confirmation should be generated,
417
00:16:26,670 --> 00:16:28,223
and I can stop the recorder.
418
00:16:29,310 --> 00:16:30,823
Everything looks as it should.
419
00:16:31,860 --> 00:16:33,440
I'm gonna stop my recorder,
420
00:16:33,440 --> 00:16:36,350
and I'm immediately brought
back inside of the k6 Cloud.
421
00:16:36,350 --> 00:16:39,760
Now, I have a few options that
I can apply to my recording,
422
00:16:39,760 --> 00:16:42,160
specifically where I want
to house this recording,
423
00:16:42,160 --> 00:16:45,470
maybe with inside of
my new project folder.
424
00:16:45,470 --> 00:16:48,673
I can give this a custom recording name.
425
00:16:52,620 --> 00:16:53,920
And then I have the option
426
00:16:53,920 --> 00:16:56,810
to bring this into the test
builder or script editor.
427
00:16:56,810 --> 00:16:57,730
For the time being,
428
00:16:57,730 --> 00:17:00,840
we're gonna pick up with
the test builder here
429
00:17:00,840 --> 00:17:02,610
and demonstrate that functionality,
430
00:17:02,610 --> 00:17:05,620
as we've already taken a
look at the script editor.
431
00:17:05,620 --> 00:17:07,730
And then I have a few
options directly below
432
00:17:07,730 --> 00:17:09,910
that I can apply to my recording as well.
433
00:17:09,910 --> 00:17:12,480
The first is correlation of
request and response data.
434
00:17:12,480 --> 00:17:15,010
So, if we detect any CSRF elements
435
00:17:15,010 --> 00:17:17,540
that are part of your recording,
436
00:17:17,540 --> 00:17:20,140
we'll do our best to
correlate those values.
437
00:17:20,140 --> 00:17:22,120
Directly below that,
438
00:17:22,120 --> 00:17:23,820
left unchecked by default,
439
00:17:23,820 --> 00:17:25,960
is the inclusion of static assets,
440
00:17:25,960 --> 00:17:27,300
and the reasoning for this is,
441
00:17:27,300 --> 00:17:28,450
nine times out of ten,
442
00:17:28,450 --> 00:17:31,720
the elements are served
up from a CDN provider,
443
00:17:31,720 --> 00:17:33,740
so unless you're trying
to test the performance
444
00:17:33,740 --> 00:17:35,010
of your CDN provider,
445
00:17:35,010 --> 00:17:37,973
you normally wouldn't include
these in your results data.
446
00:17:38,940 --> 00:17:42,030
Lastly, we have the ability
to include sleep times.
447
00:17:42,030 --> 00:17:44,690
Sleep times are beneficial twofold.
448
00:17:44,690 --> 00:17:46,680
First, it prevents the load generators
449
00:17:46,680 --> 00:17:50,150
from becoming overworked,
thus creating race conditions.
450
00:17:50,150 --> 00:17:52,890
The second, sleep
times actually emulate
451
00:17:52,890 --> 00:17:54,573
real user behavior,
452
00:17:55,450 --> 00:17:59,700
thus leaving you with more
authentic performance data.
453
00:17:59,700 --> 00:18:00,790
And last but not least,
454
00:18:00,790 --> 00:18:02,410
we have third-party domain filtering.
455
00:18:02,410 --> 00:18:05,270
So, if we detect any Google fonts, APIs,
456
00:18:05,270 --> 00:18:06,810
analytics, et cetera,
457
00:18:06,810 --> 00:18:10,190
we will omit those
unless you select below.
458
00:18:10,190 --> 00:18:12,060
So, we're gonna save our
recording, bring it in,
459
00:18:12,060 --> 00:18:13,220
and we're gonna take a look at it
460
00:18:13,220 --> 00:18:15,530
with inside of the test builder.
461
00:18:15,530 --> 00:18:16,363
For the time being,
462
00:18:16,363 --> 00:18:18,800
we're gonna skip past
our load zones, ramping VUs,
463
00:18:18,800 --> 00:18:20,330
thresholds, Cloud APM,
464
00:18:20,330 --> 00:18:25,330
and take a look at our requests
behind our script here.
465
00:18:26,320 --> 00:18:29,870
So, taking a look, if we
needed to add requests,
466
00:18:29,870 --> 00:18:31,650
we can simply, at the click of a button,
467
00:18:31,650 --> 00:18:33,390
add additional requests.
468
00:18:33,390 --> 00:18:34,360
If we wanted to,
469
00:18:34,360 --> 00:18:36,860
we could change this
from maybe a GET request.
470
00:18:36,860 --> 00:18:38,700
Maybe we're posting some data,
471
00:18:38,700 --> 00:18:40,823
so we can change this to a POST request,
472
00:18:41,730 --> 00:18:44,210
or any of the other requests
with inside of here.
473
00:18:44,210 --> 00:18:47,093
And then I simply just pass
in my API endpoint here.
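The script-editor equivalent of switching a request's method in the builder is a one-line change; the endpoint and JSON body below are hypothetical:

```js
import http from 'k6/http';

export default function () {
  // a simple GET against a hypothetical API endpoint
  http.get('https://example.com/api/items');

  // the same endpoint as a POST, sending a JSON payload
  const payload = JSON.stringify({ name: 'beanie', quantity: 1 });
  http.post('https://example.com/api/items', payload, {
    headers: { 'Content-Type': 'application/json' },
  });
}
```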
474
00:18:48,500 --> 00:18:49,700
Now I'm gonna remove that,
475
00:18:49,700 --> 00:18:52,020
and just hop right into our request.
476
00:18:52,020 --> 00:18:53,750
So, all of our requests are grouped
477
00:18:53,750 --> 00:18:56,240
according to the pages
that have been recorded.
478
00:18:56,240 --> 00:18:58,320
So, with that, we have page one,
479
00:18:58,320 --> 00:18:59,970
which was our e-commerce site.
480
00:18:59,970 --> 00:19:01,430
We made a POST request here,
481
00:19:01,430 --> 00:19:03,590
we captured some header information,
482
00:19:03,590 --> 00:19:07,660
as well as our add to
cart query parameter.
483
00:19:07,660 --> 00:19:09,690
And then we captured the product
484
00:19:09,690 --> 00:19:12,870
that was being posted to our cart here.
485
00:19:12,870 --> 00:19:14,630
We slept for two seconds,
486
00:19:14,630 --> 00:19:16,640
then we came to our cart,
487
00:19:16,640 --> 00:19:20,030
we made a GET request, captured
some header information,
488
00:19:20,030 --> 00:19:22,220
saw what was inside of our cart,
489
00:19:22,220 --> 00:19:25,050
slept for 4.6 seconds.
490
00:19:25,050 --> 00:19:26,970
Coming down to page three,
491
00:19:26,970 --> 00:19:28,790
we began the checkout process,
492
00:19:28,790 --> 00:19:30,070
we made a GET request,
493
00:19:30,070 --> 00:19:32,610
reading what was inside
of the checkout process,
494
00:19:32,610 --> 00:19:34,670
we captured some header information,
495
00:19:34,670 --> 00:19:36,040
slept for a second,
496
00:19:36,040 --> 00:19:37,390
made a POST request
497
00:19:37,390 --> 00:19:41,250
passing in the parameters
that needed to be added,
498
00:19:41,250 --> 00:19:44,573
such as the billing information
to update the order review,
499
00:19:45,840 --> 00:19:48,090
we slept for 8.9 seconds,
500
00:19:48,090 --> 00:19:52,170
and then we passed the user
information that was captured
501
00:19:52,170 --> 00:19:55,000
as part of this order with inside of it
502
00:19:55,000 --> 00:19:57,020
to the checkout process.
503
00:19:57,020 --> 00:20:00,260
Finally, sleeping for almost four seconds,
504
00:20:00,260 --> 00:20:01,690
coming to our last page,
505
00:20:01,690 --> 00:20:03,150
where we've completed the order,
506
00:20:03,150 --> 00:20:05,690
we made a GET request
reading the information
507
00:20:05,690 --> 00:20:07,700
that was part of our order,
508
00:20:07,700 --> 00:20:10,430
and captured that query parameter,
509
00:20:10,430 --> 00:20:12,030
slept for 1.2 seconds,
510
00:20:12,030 --> 00:20:15,763
and then finally ending
with our AJAX refresh.
511
00:20:17,526 --> 00:20:18,920
Now, where the real magic comes in
512
00:20:18,920 --> 00:20:21,290
is I just captured the HAR recording
513
00:20:21,290 --> 00:20:23,450
and brought it into the test builder.
514
00:20:23,450 --> 00:20:25,760
Now I can flip this switch
515
00:20:25,760 --> 00:20:28,790
from the test builder
to the script editor,
516
00:20:28,790 --> 00:20:29,840
and voila,
517
00:20:29,840 --> 00:20:33,260
a test script has been
automatically created for me.
518
00:20:33,260 --> 00:20:36,370
Now, it cut down on some
manual coding from my end,
519
00:20:36,370 --> 00:20:39,270
and additionally, I now
have this full test script
520
00:20:39,270 --> 00:20:41,830
with all the logic behind my test.
521
00:20:41,830 --> 00:20:42,900
What I can do now
522
00:20:42,900 --> 00:20:45,600
is I can select everything,
copy the script,
523
00:20:45,600 --> 00:20:49,150
bring it to my local IDE for
any changes or version control,
524
00:20:49,150 --> 00:20:50,830
I can create the test as is
525
00:20:50,830 --> 00:20:52,330
from the script that's been generated,
526
00:20:52,330 --> 00:20:54,280
and save that and run it at a later date
527
00:20:54,280 --> 00:20:56,150
with inside of the k6 Cloud.
528
00:20:56,150 --> 00:21:00,170
Or there are a few changes that
I can make to the parameters
529
00:21:00,170 --> 00:21:02,500
with inside of the options object.
530
00:21:02,500 --> 00:21:05,320
So, coming up to our Load
zones first, starting here,
531
00:21:05,320 --> 00:21:07,440
we can add additional load zones.
532
00:21:07,440 --> 00:21:11,030
Let's say I have a use case
where most of my clients
533
00:21:11,030 --> 00:21:14,980
are based within Great Britain,
534
00:21:14,980 --> 00:21:18,270
and then I have a few additional
clients based globally,
535
00:21:18,270 --> 00:21:19,600
and in Sao Paulo.
536
00:21:19,600 --> 00:21:21,300
So what I wanna do here
537
00:21:21,300 --> 00:21:24,300
is maybe I wanna do a split distribution.
538
00:21:24,300 --> 00:21:27,670
I wanna push most of my load to Sao Paulo.
539
00:21:27,670 --> 00:21:31,700
I'll put a little bit
more to Great Britain,
540
00:21:31,700 --> 00:21:33,770
or I wanna push most of
my load to Great Britain,
541
00:21:33,770 --> 00:21:35,950
some to Sao Paulo, and some to Ashburn
542
00:21:35,950 --> 00:21:38,293
so I have a globally
distributed test here.
543
00:21:39,620 --> 00:21:40,453
Additionally,
544
00:21:40,453 --> 00:21:42,090
I'm gonna come down to
my Ramping Virtual Users,
545
00:21:42,090 --> 00:21:44,060
or the stages of my test.
546
00:21:44,060 --> 00:21:46,380
20 virtual users looks kinda low,
547
00:21:46,380 --> 00:21:48,140
so I'm gonna go ahead and modify this.
548
00:21:48,140 --> 00:21:51,210
I'm gonna throw maybe
200 virtual users at it.
549
00:21:51,210 --> 00:21:52,420
So, during that first minute,
550
00:21:52,420 --> 00:21:54,680
I'm gonna scale up to 200 virtual users,
551
00:21:54,680 --> 00:21:58,130
hold them for a duration of
3 minutes and 30 seconds,
552
00:21:58,130 --> 00:21:59,250
and then during the last minute,
553
00:21:59,250 --> 00:22:03,320
I'll scale down to zero
virtual users in my test.
554
00:22:03,320 --> 00:22:04,420
We have our threshold
555
00:22:04,420 --> 00:22:07,040
so we can build new
thresholds behind our test.
556
00:22:07,040 --> 00:22:09,610
Let's say we wanna do
a response time metric
557
00:22:09,610 --> 00:22:11,660
across all URLs.
558
00:22:11,660 --> 00:22:14,150
We wanna test according
to the 95th percentile
559
00:22:14,150 --> 00:22:18,180
for a condition less
than 1,000 milliseconds.
560
00:22:18,180 --> 00:22:21,560
And we wanna be a little bit
strict with this threshold,
561
00:22:21,560 --> 00:22:25,343
so we're gonna set up the
STOP TEST flag here as well.
562
00:22:26,690 --> 00:22:27,670
Coming down.
563
00:22:27,670 --> 00:22:29,740
So, we have our Cloud APM section,
564
00:22:29,740 --> 00:22:32,170
where we can create new configurations,
565
00:22:32,170 --> 00:22:34,290
but we need to save that first,
566
00:22:34,290 --> 00:22:36,110
so we're gonna go ahead and save that,
567
00:22:36,110 --> 00:22:38,760
and we can create a new configuration.
568
00:22:38,760 --> 00:22:39,713
I'm gonna back up.
569
00:22:40,890 --> 00:22:43,760
And lastly, we have our
requests, as we've seen.
570
00:22:43,760 --> 00:22:45,780
So, I'm gonna go ahead
and convert the script
571
00:22:45,780 --> 00:22:47,933
from the builder to the script editor.
572
00:22:48,840 --> 00:22:51,760
And now we can see the
changes that I've implemented
573
00:22:51,760 --> 00:22:53,720
using my GUI here.
574
00:22:53,720 --> 00:22:55,220
I now have three load zones,
575
00:22:55,220 --> 00:22:58,170
with most of my load being
pushed towards London,
576
00:22:58,170 --> 00:23:01,630
also split between Sao Paulo and Ashburn.
577
00:23:01,630 --> 00:23:03,410
I have 200 virtual users
578
00:23:03,410 --> 00:23:06,530
across a 5 minute and 30 second duration,
579
00:23:06,530 --> 00:23:08,920
and then I have that
threshold that we included,
580
00:23:08,920 --> 00:23:11,080
the HTTP request duration threshold
581
00:23:11,080 --> 00:23:12,910
measured across the 95th percentile
582
00:23:12,910 --> 00:23:15,330
for a condition less
than 1,000 milliseconds.
583
00:23:15,330 --> 00:23:19,930
And we have that abortOnFail
flag set to true,
584
00:23:19,930 --> 00:23:22,020
so if this threshold is exceeded,
585
00:23:22,020 --> 00:23:23,500
the test will stop completely
586
00:23:23,500 --> 00:23:26,320
and output metrics to either the k6 Cloud,
587
00:23:26,320 --> 00:23:28,550
or to the command line if run locally.
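The options the builder generates for that configuration look roughly like this sketch; the zone percentages are illustrative, and the load-zone identifiers follow the amazon:country:city pattern used by k6 Cloud:

```js
import http from 'k6/http';

export const options = {
  stages: [
    { duration: '1m', target: 200 },    // ramp up to 200 virtual users
    { duration: '3m30s', target: 200 }, // hold for 3 minutes 30 seconds
    { duration: '1m', target: 0 },      // ramp down to zero
  ],
  thresholds: {
    http_req_duration: [
      // abortOnFail stops the whole test as soon as p(95) exceeds 1,000 ms
      { threshold: 'p(95)<1000', abortOnFail: true },
    ],
  },
  ext: {
    loadimpact: {
      distribution: {
        london:   { loadZone: 'amazon:gb:london', percent: 60 },
        saoPaulo: { loadZone: 'amazon:br:sao paulo', percent: 20 },
        ashburn:  { loadZone: 'amazon:us:ashburn', percent: 20 },
      },
    },
  },
};

export default function () {
  http.get('https://test.k6.io'); // placeholder request
}
```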
588
00:23:28,550 --> 00:23:32,910
So, we're gonna hook up our IDE here,
589
00:23:32,910 --> 00:23:35,800
and take a look at a
pre-composed test script.
590
00:23:35,800 --> 00:23:39,170
I'm gonna go ahead and get
rid of my integrated terminal,
591
00:23:39,170 --> 00:23:44,130
just so we can take a look
at some of the logic here.
592
00:23:44,130 --> 00:23:45,920
Now, starting off with our test script,
593
00:23:45,920 --> 00:23:48,630
we're going to be importing
a few methods to help us out.
594
00:23:48,630 --> 00:23:53,420
So, we're importing http,
allowing us to make HTTP requests.
595
00:23:53,420 --> 00:23:56,050
We're gonna import check,
group, and sleep from k6,
596
00:23:56,050 --> 00:23:58,860
a few methods that will help
us out a little bit later.
597
00:23:58,860 --> 00:24:01,520
And then we're gonna import
Counter, Rate, and Trend,
598
00:24:01,520 --> 00:24:04,085
allowing us to configure custom metrics.
599
00:24:04,085 --> 00:24:06,170
Additionally, and most notably,
600
00:24:06,170 --> 00:24:08,080
we're going to import randomIntBetween
601
00:24:08,080 --> 00:24:09,950
from our JS hosted library.
602
00:24:09,950 --> 00:24:13,100
And then, we have this
users.json document here.
603
00:24:13,100 --> 00:24:13,933
Taking a look,
604
00:24:13,933 --> 00:24:16,490
so I have six users with inside of here.
605
00:24:16,490 --> 00:24:18,910
I have one user with valid credentials.
606
00:24:18,910 --> 00:24:19,860
This is a hint,
607
00:24:19,860 --> 00:24:22,310
we intentionally designed
this test to fail,
608
00:24:22,310 --> 00:24:24,903
thus producing some custom
metrics on the back end.
609
00:24:26,330 --> 00:24:27,850
So, we are gonna open up that document,
610
00:24:27,850 --> 00:24:29,300
parse in the JSON data,
611
00:24:29,300 --> 00:24:32,210
save that to our login data object here.
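The init context being described looks roughly like this; the jslib version pinned in the URL and the shape of users.json are assumptions:

```js
// Init context: dependencies and shared data, loaded once per virtual user.
import http from 'k6/http';
import { check, group, sleep } from 'k6';          // used in the main logic
import { Counter, Rate, Trend } from 'k6/metrics'; // for custom metrics declared later
import { randomIntBetween } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';

// open() only works in the init context; assumed file shape:
// { "users": [ { "username": "...", "password": "..." }, ... ] }
const loginData = JSON.parse(open('./users.json'));

export default function () {
  // pick a random user from the parsed file (illustrative usage)
  const user = loginData.users[randomIntBetween(0, loginData.users.length - 1)];
  const res = http.get('https://test.k6.io'); // placeholder request
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(randomIntBetween(1, 15));              // pause 1 to 15 seconds
}
```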
612
00:24:32,210 --> 00:24:34,270
Getting into our options object.
613
00:24:34,270 --> 00:24:37,150
So, we have our stages
or ramping profiles,
614
00:24:37,150 --> 00:24:40,870
we've configured a 200 virtual
user five minute load test.
615
00:24:40,870 --> 00:24:42,380
Coming down to our thresholds.
616
00:24:42,380 --> 00:24:45,000
We have three unique
thresholds with inside of here,
617
00:24:45,000 --> 00:24:47,490
the first is an HTTP
request duration threshold
618
00:24:47,490 --> 00:24:49,340
measured across the 95th percentile
619
00:24:49,340 --> 00:24:51,900
for a condition less than
a hundred milliseconds.
620
00:24:51,900 --> 00:24:54,720
The second HTTP request
duration threshold,
621
00:24:54,720 --> 00:24:55,990
very similar to the first,
622
00:24:55,990 --> 00:24:58,660
but here, we're introducing
the concept of tagging.
623
00:24:58,660 --> 00:25:01,310
Tagging allows us to visualize
the individual metric
624
00:25:01,310 --> 00:25:03,970
being measured across
this given threshold,
625
00:25:03,970 --> 00:25:06,000
and while I don't believe
that anybody would measure
626
00:25:06,000 --> 00:25:09,360
across the 73rd percentile,
we wanted to illustrate this,
627
00:25:09,360 --> 00:25:11,240
just demonstrating that our percentiles
628
00:25:11,240 --> 00:25:12,973
are not rigid by any means.
629
00:25:13,820 --> 00:25:15,580
Lastly, we have our check failure rating,
630
00:25:15,580 --> 00:25:18,800
measuring for a rate less than 30%.
631
00:25:18,800 --> 00:25:22,000
We then get into our k6 Cloud extension.
632
00:25:22,000 --> 00:25:23,240
We have our project ID,
633
00:25:23,240 --> 00:25:25,960
which can be sourced from the
top of our project folder,
634
00:25:25,960 --> 00:25:27,550
if you recall.
635
00:25:27,550 --> 00:25:29,070
So, we can copy that,
636
00:25:29,070 --> 00:25:31,490
come back to our IDE, plug that in,
637
00:25:31,490 --> 00:25:33,390
and we're all ready to go.
638
00:25:33,390 --> 00:25:34,380
We have the name of our test,
639
00:25:34,380 --> 00:25:36,530
"Insight Demo with Cloud Execution."
640
00:25:36,530 --> 00:25:39,140
Again, just a method
of good practice here,
641
00:25:39,140 --> 00:25:40,210
especially if you're working
642
00:25:40,210 --> 00:25:42,020
with inside of development folders
643
00:25:42,020 --> 00:25:43,300
that have naming conventions,
644
00:25:43,300 --> 00:25:46,540
you wanna make sure that you're
triggering the right test.
645
00:25:46,540 --> 00:25:48,310
We then have our load distributions,
646
00:25:48,310 --> 00:25:50,890
this has been configured
across three load zones,
647
00:25:50,890 --> 00:25:52,990
Dublin, Ashburn, and Columbus,
648
00:25:52,990 --> 00:25:56,320
with most of my load being
pushed across Columbus.
649
00:25:56,320 --> 00:25:58,440
Additionally here, we have a note
650
00:25:58,440 --> 00:26:01,060
that we'll be passing as
an environment variable.
651
00:26:01,060 --> 00:26:03,360
So, if you have a note for
one of your developers,
652
00:26:03,360 --> 00:26:04,910
maybe you want to push a test
653
00:26:04,910 --> 00:26:07,070
so that they can review
it at a later date,
654
00:26:07,070 --> 00:26:10,053
you can include a note as
an environment variable.
655
00:26:11,490 --> 00:26:13,210
Moving down to line 33,
656
00:26:13,210 --> 00:26:15,310
I have a counter for my successful logins,
657
00:26:16,865 --> 00:26:18,330
a rate for my check failure rating,
658
00:26:18,330 --> 00:26:20,660
and my trend for time to first byte.
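Declaring those three custom metrics, and wiring thresholds to them, looks roughly like this; metric names, the tag, and the values are illustrative:

```js
import http from 'k6/http';
import { Counter, Rate, Trend } from 'k6/metrics';

const successfulLogins = new Counter('successful_logins');      // counts events
const checkFailureRate = new Rate('check_failure_rate');        // ratio of failing samples
const timeToFirstByte  = new Trend('time_to_first_byte', true); // timing distribution in ms

export const options = {
  thresholds: {
    check_failure_rate: ['rate<0.3'],                    // fail if more than 30% of checks fail
    'http_req_duration{staticAsset:yes}': ['p(73)<500'], // threshold on a tagged sub-metric
  },
};

export default function () {
  const res = http.get('https://test.k6.io', { tags: { staticAsset: 'yes' } }); // placeholder
  successfulLogins.add(1);                                        // count an event
  checkFailureRate.add(res.status !== 200);                       // true samples count as failures
  timeToFirstByte.add(res.timings.waiting, { ttfbURL: res.url }); // time to first byte, tagged
}
```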
659
00:26:20,660 --> 00:26:23,240
And then we get to the
export default function.
660
00:26:23,240 --> 00:26:24,950
As previously mentioned,
661
00:26:24,950 --> 00:26:26,280
the export default function
662
00:26:26,280 --> 00:26:28,340
is critically important for two reasons.
663
00:26:28,340 --> 00:26:29,390
This is the main housing
664
00:26:29,390 --> 00:26:31,710
for all of your logic
behind your test script.
665
00:26:31,710 --> 00:26:33,190
This is also the main entry point
666
00:26:33,190 --> 00:26:34,913
for all of your virtual users.
667
00:26:35,780 --> 00:26:37,980
So, we're gonna open this
up here, take a look.
668
00:26:37,980 --> 00:26:39,880
We're gonna enter into
our front page group.
669
00:26:39,880 --> 00:26:41,760
We're gonna define our response object,
670
00:26:41,760 --> 00:26:43,510
setting it equal to null.
671
00:26:43,510 --> 00:26:46,300
We set our response object
equal to our GET request
672
00:26:46,300 --> 00:26:47,540
against our test URL,
673
00:26:47,540 --> 00:26:50,470
bringing in our Math.round
and randomIntBetween function
674
00:26:50,470 --> 00:26:53,350
to generate a position
between 1 and 2,000.
675
00:26:53,350 --> 00:26:55,070
And then here, we're
gonna apply some tags,
676
00:26:55,070 --> 00:26:57,160
such as the name tag with our test URL,
677
00:26:57,160 --> 00:26:58,690
and an Aggregated flag,
678
00:26:58,690 --> 00:27:01,860
because I wanna visualize
my initial HTTP request
679
00:27:01,860 --> 00:27:03,223
as part of my test script.
680
00:27:04,060 --> 00:27:06,220
Additionally, I'm gonna
run some checks here,
681
00:27:06,220 --> 00:27:08,440
making sure that the homepage
body size is a given length,
682
00:27:08,440 --> 00:27:11,500
and that our homepage welcome
header is present on the page.
683
00:27:11,500 --> 00:27:12,920
We record our check failures,
684
00:27:12,920 --> 00:27:14,500
record our time to first byte,
685
00:27:14,500 --> 00:27:17,460
tagging that with the
time to first byte URL,
686
00:27:17,460 --> 00:27:20,110
and then we move into
our static assets group.
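Before moving on, the front-page group just walked through might look roughly like this; the URL, query parameter, expected body size, and header text are placeholders:

```js
import http from 'k6/http';
import { check, group } from 'k6';
import { Rate, Trend } from 'k6/metrics';
import { randomIntBetween } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';

const checkFailureRate = new Rate('check_failure_rate');
const timeToFirstByte = new Trend('time_to_first_byte', true);

export default function () {
  group('Front page', function () {
    let res = null;
    // GET against the test URL with a random value between 1 and 2,000, tagged with a
    // name so every iteration aggregates under one entry instead of thousands of URLs
    res = http.get(`https://test.k6.io/?p=${randomIntBetween(1, 2000)}`, {
      tags: { name: 'https://test.k6.io', aggregated: 'true' },
    });

    // checks record pass/fail without stopping the test
    const checksOk = check(res, {
      'homepage body size looks right': (r) => r.body.length > 10000,       // placeholder size
      'homepage welcome header present': (r) => r.body.includes('Welcome'), // placeholder text
    });

    checkFailureRate.add(!checksOk);                                // record check failures
    timeToFirstByte.add(res.timings.waiting, { ttfbURL: res.url }); // record time to first byte
  });
}
```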
687
00:27:20,110 --> 00:27:22,050
We let our response time,
688
00:27:22,050 --> 00:27:23,000
we let, (chuckles)
689
00:27:23,000 --> 00:27:24,380
excuse me,
690
00:27:24,380 --> 00:27:28,220
we let our response object
equal our batch method here.
691
00:27:28,220 --> 00:27:30,020
And as I mentioned a little bit earlier,
692
00:27:30,020 --> 00:27:32,840
so the batch method gives you
an added layer of concurrency
693
00:27:32,840 --> 00:27:34,820
while making parallel requests.
694
00:27:34,820 --> 00:27:36,500
So in the example configured below,
695
00:27:36,500 --> 00:27:38,930
I'm making a GET request to my CSS,
696
00:27:38,930 --> 00:27:40,490
another to the JS here,
697
00:27:40,490 --> 00:27:43,070
and I'm applying some tags,
such as a static asset tag,
698
00:27:43,070 --> 00:27:44,660
other tag, and name tag,
699
00:27:44,660 --> 00:27:48,050
to help out with the
visualization a little bit later.
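A sketch of that static-assets batch: http.batch() issues the requests in parallel from the same virtual user. Asset paths and tag values are placeholders:

```js
import http from 'k6/http';

export default function () {
  const responses = http.batch([
    ['GET', 'https://test.k6.io/static/css/site.css', null,
      { tags: { staticAsset: 'yes', name: 'site.css' } }],
    ['GET', 'https://test.k6.io/static/js/app.js', null,
      { tags: { staticAsset: 'yes', name: 'app.js' } }],
  ]);

  // responses[0] is the stylesheet response, responses[1] the script response
  console.log(`stylesheet bytes: ${responses[0].body.length}`);
}
```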
700
00:27:48,050 --> 00:27:48,883
I run the check,
701
00:27:48,883 --> 00:27:50,910
making sure that my style
sheet is a given length.
702
00:27:50,910 --> 00:27:52,450
I record my check failures,
703
00:27:52,450 --> 00:27:53,980
record my time to first byte,
704
00:27:53,980 --> 00:27:56,520
tagging that with the
time to first byte URL
705
00:27:56,520 --> 00:27:58,160
and static asset tag.
706
00:27:58,160 --> 00:28:00,500
And then I close my static assets group.
707
00:28:00,500 --> 00:28:02,230
I close my login group,
708
00:28:02,230 --> 00:28:03,880
and then I let my virtual users sleep
709
00:28:03,880 --> 00:28:06,893
for a random duration
between 1 and 15 seconds.
710
00:28:07,860 --> 00:28:10,330
Then I begin to move
into my login group here.
711
00:28:10,330 --> 00:28:12,700
I let my response object
equal my GET requests
712
00:28:12,700 --> 00:28:13,990
against my messages page.
713
00:28:13,990 --> 00:28:16,810
I run a check making sure
no unauthorized users
714
00:28:16,810 --> 00:28:18,270
are present on the page.
715
00:28:18,270 --> 00:28:21,770
I extract the CSRF token.
716
00:28:21,770 --> 00:28:24,230
Using the response object,
as well as the HTML,
717
00:28:24,230 --> 00:28:26,370
I find the input named CSRF token,
718
00:28:26,370 --> 00:28:27,320
take the first value,
719
00:28:27,320 --> 00:28:28,963
and then I extract that value.
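That extraction uses k6's HTML selection API; a minimal sketch, assuming a placeholder page URL and input name:

```js
import http from 'k6/http';

export default function () {
  const res = http.get('https://test.k6.io/my_messages.php'); // placeholder login page

  // parse the body as HTML, find the hidden input carrying the token,
  // take the first match, and read its value attribute
  const csrfToken = res.html().find('input[name=csrftoken]').first().attr('value');

  console.log(`extracted CSRF token: ${csrfToken}`);
}
```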
720
00:28:29,920 --> 00:28:33,273
I run a check, or I record
my check failure rating.
721
00:28:34,400 --> 00:28:35,990
And then here's where we begin to use
722
00:28:35,990 --> 00:28:38,460
that users.json document.
723
00:28:38,460 --> 00:28:41,410
So here, I'm bringing in my
Math.floor and Math.random
724
00:28:41,410 --> 00:28:44,980
to randomly loop through
that users.json document,
725
00:28:44,980 --> 00:28:46,570
and then I select their position
726
00:28:46,570 --> 00:28:48,920
and save that to my position variable.
727
00:28:48,920 --> 00:28:50,010
On line 99,
728
00:28:50,010 --> 00:28:52,920
I reuse my position variable
to extract the credentials
729
00:28:52,920 --> 00:28:54,250
from that user's position,
730
00:28:54,250 --> 00:28:57,020
and I save them to my credentials object.
731
00:28:57,020 --> 00:28:58,710
And then on line 101,
732
00:28:58,710 --> 00:29:01,720
I set my response object
equal to my POST request
733
00:29:01,720 --> 00:29:03,030
against the login page,
734
00:29:03,030 --> 00:29:06,400
passing in my username,
password, and that CSRF token,
735
00:29:06,400 --> 00:29:07,520
and then I run a check
736
00:29:07,520 --> 00:29:09,630
making sure that the
logged in welcome header
737
00:29:09,630 --> 00:29:11,013
is present on the page.
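A sketch of that login step, picking a random user from users.json and posting the credentials plus the token as form data; the file shape, field names, URL, and header text are assumptions:

```js
import http from 'k6/http';
import { check } from 'k6';

// assumed shape: { "users": [ { "username": "...", "password": "..." }, ... ] }
const loginData = JSON.parse(open('./users.json'));

export default function () {
  // Math.floor + Math.random picks a random position in the users array
  const position = Math.floor(Math.random() * loginData.users.length);
  const credentials = loginData.users[position];

  // passing an object body sends it as application/x-www-form-urlencoded
  const res = http.post('https://test.k6.io/login.php', {
    login: credentials.username,
    password: credentials.password,
    csrftoken: 'TOKEN_FROM_THE_PREVIOUS_REQUEST', // placeholder for the extracted value
  });

  check(res, {
    'logged-in welcome header present': (r) => r.body.includes('Welcome'), // placeholder text
  });
}
```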
738
00:29:13,813 --> 00:29:14,860
Just wanna move this over a little bit
739
00:29:14,860 --> 00:29:16,423
so everybody can see the logic.
740
00:29:17,760 --> 00:29:19,160
Now coming down, I run a check,
741
00:29:19,160 --> 00:29:21,620
making sure that the logged
in welcome header is present.
742
00:29:21,620 --> 00:29:24,060
I record my successful logins,
743
00:29:24,060 --> 00:29:25,490
record my check failures,
744
00:29:25,490 --> 00:29:26,870
tagging that with the page login,
745
00:29:26,870 --> 00:29:28,410
record my time to first byte,
746
00:29:28,410 --> 00:29:30,620
tagging that with the
time to first byte URL,
747
00:29:30,620 --> 00:29:33,803
I sleep for 10 seconds, and
then I console log my note.
748
00:29:34,980 --> 00:29:36,770
I close my login group,
749
00:29:36,770 --> 00:29:38,690
and close my export default function.
750
00:29:38,690 --> 00:29:40,807
Now, a common question that we receive is,
751
00:29:40,807 --> 00:29:43,000
"I've reached the end of my test script,
752
00:29:43,000 --> 00:29:44,530
my test has been executed.
753
00:29:44,530 --> 00:29:46,030
What do my virtual users do,
754
00:29:46,030 --> 00:29:47,950
now that they've reached
the end of the test script
755
00:29:47,950 --> 00:29:49,540
and there's time remaining on the test?"
756
00:29:49,540 --> 00:29:51,390
And it's an excellent question.
757
00:29:51,390 --> 00:29:52,223
So with that,
758
00:29:52,223 --> 00:29:54,310
our virtual users will loop over the logic
759
00:29:54,310 --> 00:29:57,630
stored with inside of our
export default function
760
00:29:57,630 --> 00:30:00,253
until time has expired on the test.
761
00:30:02,070 --> 00:30:03,810
So, with the logic we have stored
762
00:30:03,810 --> 00:30:05,510
in our export default function,
763
00:30:05,510 --> 00:30:08,830
and a 200 virtual user,
five minute load test,
764
00:30:08,830 --> 00:30:13,350
we're capable of achieving
roughly 2,500 unique session IPs.
765
00:30:13,350 --> 00:30:16,280
So something to give the
folks watching an idea
766
00:30:16,280 --> 00:30:18,580
on the capabilities of
our load generators,
767
00:30:18,580 --> 00:30:20,530
as we spoke about a little bit earlier.
768
00:30:21,750 --> 00:30:22,597
So with that,
769
00:30:22,597 --> 00:30:25,660
I'm gonna go ahead and pull
up an integrated terminal here
770
00:30:25,660 --> 00:30:27,223
with inside of VS Code,
771
00:30:28,080 --> 00:30:31,170
and make this a little bit bigger.
772
00:30:31,170 --> 00:30:32,640
I'm gonna go ahead and run the command
773
00:30:32,640 --> 00:30:34,260
k6 run,
774
00:30:34,260 --> 00:30:35,450
or k6,
775
00:30:35,450 --> 00:30:38,123
to validate that k6 is
installed on my local machine.
776
00:30:39,090 --> 00:30:41,410
I see k6 is installed here.
777
00:30:41,410 --> 00:30:43,340
So, I have a list of available commands,
778
00:30:43,340 --> 00:30:45,860
as well as some flags that we can set up
779
00:30:45,860 --> 00:30:48,223
with the k6 Open Source Solution.
780
00:30:49,080 --> 00:30:50,440
So, in addition to this,
781
00:30:50,440 --> 00:30:54,390
k6 has one distinct run mode
for the Open Source Solution,
782
00:30:54,390 --> 00:30:56,335
which is "k6 run."
783
00:30:56,335 --> 00:30:58,850
Here, we can use k6 Open Source
784
00:30:58,850 --> 00:31:02,480
to do some quick and local
debugging of our test scripts
785
00:31:02,480 --> 00:31:05,180
before we push this up to the k6 Cloud.
786
00:31:05,180 --> 00:31:07,700
So, I'm going to use a shorter test script
787
00:31:07,700 --> 00:31:09,800
for demo purposes.
788
00:31:09,800 --> 00:31:12,260
I'm going to pass in the
name of my test script here.
789
00:31:12,260 --> 00:31:14,510
I'm gonna set a flag for my iterator,
790
00:31:14,510 --> 00:31:15,630
which is -i.
791
00:31:15,630 --> 00:31:17,480
I'll do one iteration.
792
00:31:17,480 --> 00:31:19,643
I'll set another flag,
-u, for our virtual users.
793
00:31:19,643 --> 00:31:21,690
I'll do one virtual user.
794
00:31:21,690 --> 00:31:22,523
And then lastly,
795
00:31:22,523 --> 00:31:25,453
we'll set a flag here for --http-debug.
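Put together, the local debug run looks like the line below; "test.js" stands in for the shorter demo script, whose actual file name isn't shown on screen.

```sh
# One iteration, one virtual user, with HTTP request/response logging enabled.
k6 run test.js -i 1 -u 1 --http-debug
```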
796
00:31:26,950 --> 00:31:28,560
Immediately after kicking that off,
797
00:31:28,560 --> 00:31:31,650
we get a copy of the
request and response data,
798
00:31:31,650 --> 00:31:34,270
a copy of the page data,
a copy of our test script,
799
00:31:34,270 --> 00:31:36,360
some more request and response data,
800
00:31:36,360 --> 00:31:38,380
we see that our checks
came back successfully,
801
00:31:38,380 --> 00:31:42,980
and we have the metrics
captured by k6 Open Source.
802
00:31:42,980 --> 00:31:44,907
Now, another common
question we receive is,
803
00:31:44,907 --> 00:31:47,630
"How can I get my results
in a different format?"
804
00:31:47,630 --> 00:31:49,530
Maybe I don't want the raw metrics,
805
00:31:49,530 --> 00:31:52,120
I wanna see these in JSON data
806
00:31:52,120 --> 00:31:55,390
because I wanna make a
REST API out of my metrics,
807
00:31:55,390 --> 00:31:57,440
or something along the lines of that.
808
00:31:57,440 --> 00:32:01,610
So, what we can do is modify
the flag for our test to --out.
809
00:32:01,610 --> 00:32:03,603
And say I want it in a JSON format,
810
00:32:04,520 --> 00:32:05,900
I kick that off,
811
00:32:05,900 --> 00:32:08,393
I get my metrics back in a JSON format.
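The JSON variant of that run is shown below; the file name after json= is an assumption.

```sh
# Write every metric sample as JSON instead of the default end-of-test summary.
k6 run test.js --out json=results.json
```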
812
00:32:09,290 --> 00:32:11,150
Now, last but not least,
813
00:32:11,150 --> 00:32:14,820
one of the most common uses
is testing private APIs
814
00:32:14,820 --> 00:32:16,840
that may be behind a firewall.
815
00:32:16,840 --> 00:32:18,470
So, with k6 Open Source,
816
00:32:18,470 --> 00:32:22,020
what we can do is do k6 run,
817
00:32:22,020 --> 00:32:23,513
the name of our test script,
818
00:32:25,672 --> 00:32:28,510
and then we can set a
flag here for -o cloud.
819
00:32:28,510 --> 00:32:32,460
What this will tell k6
to do is execute locally,
820
00:32:32,460 --> 00:32:33,890
bundle up our results data,
821
00:32:33,890 --> 00:32:36,720
and then stream it
directly to the k6 Cloud,
822
00:32:36,720 --> 00:32:37,810
where we'll pull it down,
823
00:32:37,810 --> 00:32:39,930
and then present it with
inside of the dashboards
824
00:32:39,930 --> 00:32:41,490
that we've created.
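That local-execution-with-cloud-results workflow boils down to the following, again using "test.js" as a stand-in script name:

```sh
# Run locally (e.g., inside the firewall) and stream results to the k6 Cloud dashboards.
# -o is the short form of --out.
k6 run test.js -o cloud
```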
825
00:32:41,490 --> 00:32:43,837
Now, the viewers may be asking themselves,
826
00:32:43,837 --> 00:32:47,970
"Okay, well how do I access
the Cloud method of execution?
827
00:32:47,970 --> 00:32:50,010
Because you told me that k6 Open Source
828
00:32:50,010 --> 00:32:52,317
only has one run mode."
829
00:32:53,220 --> 00:32:55,840
This can be achieved
with a k6 Cloud account,
830
00:32:55,840 --> 00:32:58,410
whether that's a trial,
a full paid subscription,
831
00:32:58,410 --> 00:33:00,260
whatever the case may be,
832
00:33:00,260 --> 00:33:02,200
by running k6 login cloud,
833
00:33:02,200 --> 00:33:04,810
passing in my email as
well as my password,
834
00:33:04,810 --> 00:33:06,520
and authenticating that way.
835
00:33:06,520 --> 00:33:07,353
Additionally,
836
00:33:07,353 --> 00:33:09,950
as I demonstrated during the
beginning of the walkthrough,
837
00:33:09,950 --> 00:33:13,160
we can set a flag here,
come back to the web app,
838
00:33:13,160 --> 00:33:15,940
grab our API token from our user settings,
839
00:33:15,940 --> 00:33:18,810
and then pass that as an
argument on the command line.
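Those two authentication routes look like this on the command line; YOUR_API_TOKEN is a placeholder for the token copied from the user settings page.

```sh
k6 login cloud                          # prompts for email and password
k6 login cloud --token YOUR_API_TOKEN   # or authenticate with the API token instead
```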
840
00:33:18,810 --> 00:33:19,990
Once authenticated,
841
00:33:19,990 --> 00:33:22,840
we can take advantage of the
Cloud method of execution,
842
00:33:22,840 --> 00:33:24,690
which is k6 cloud,
843
00:33:24,690 --> 00:33:26,060
and now we're gonna trigger a test
844
00:33:26,060 --> 00:33:29,190
against the test script that
we just walked through,
845
00:33:29,190 --> 00:33:31,160
which was website.js.
846
00:33:31,160 --> 00:33:34,550
I'm gonna set up a flag here
for my environment variable.
847
00:33:34,550 --> 00:33:38,887
I'm gonna set my note equal to, let's say,
848
00:33:38,887 --> 00:33:40,817
"New Test 10.1.2021"
849
00:33:47,260 --> 00:33:48,963
And now I'm gonna trigger this.
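The cloud-executed run being triggered here looks roughly like the line below; the environment variable name NOTE is an assumption, and -e makes it available to the script as __ENV.NOTE.

```sh
k6 cloud website.js -e NOTE="New Test 10.1.2021"
```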
850
00:33:49,832 --> 00:33:53,630
k6 Cloud is now taking a look at my test.
851
00:33:53,630 --> 00:33:54,463
In the terminal,
852
00:33:54,463 --> 00:33:56,397
I can validate that my
method of execution is cloud,
853
00:33:56,397 --> 00:33:58,640
and the script is website.js.
854
00:33:58,640 --> 00:33:59,780
I have a link to the output,
855
00:33:59,780 --> 00:34:02,253
which directs me to the k6 Cloud.
856
00:34:03,760 --> 00:34:05,100
At this stage of the game,
857
00:34:05,100 --> 00:34:08,170
k6 is looping through my
initialization context,
858
00:34:08,170 --> 00:34:09,990
importing any dependencies,
859
00:34:09,990 --> 00:34:12,400
taking a look at that users.json file,
860
00:34:12,400 --> 00:34:14,700
including it with inside of my test,
861
00:34:14,700 --> 00:34:17,510
setting up my stages, my thresholds,
862
00:34:17,510 --> 00:34:20,016
and my extension settings with inside of the k6 Cloud,
863
00:34:20,016 --> 00:34:22,890
where I'm spinning up that load zone,
864
00:34:22,890 --> 00:34:25,903
and then including my note as
an environment variable here.
865
00:34:27,070 --> 00:34:29,610
We then set up the custom metrics,
866
00:34:29,610 --> 00:34:31,830
and k6 will loop through the logic
867
00:34:31,830 --> 00:34:34,483
stored with inside of my
export default function.
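A rough sketch of the options block being processed at this stage is shown below: stages, thresholds, and the k6 Cloud extension that selects the load zone. The stage shape and threshold value are illustrative; only the overall structure follows the walkthrough.

```js
export const options = {
  stages: [
    { duration: '1m', target: 200 }, // ramp up
    { duration: '3m', target: 200 }, // hold
    { duration: '1m', target: 0 },   // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<1000'], // example threshold on the 95th percentile
  },
  ext: {
    loadimpact: {
      distribution: {
        columbus: { loadZone: 'amazon:us:columbus', percent: 100 },
      },
    },
  },
};

export default function () {
  console.log(__ENV.NOTE); // the note passed on the command line with -e
}
```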
868
00:34:35,480 --> 00:34:38,820
Within a second, I believe
this will begin to run.
869
00:34:38,820 --> 00:34:42,610
Yep, and we can see our tests
begin to run in real time.
870
00:34:42,610 --> 00:34:47,610
So, this will begin
to console log our notes,
871
00:34:48,170 --> 00:34:51,790
so I'm going to access
our result data here now.
872
00:34:51,790 --> 00:34:53,630
So, with inside of the k6 Cloud,
873
00:34:53,630 --> 00:34:56,203
I can visualize my test
running in real time.
874
00:34:57,170 --> 00:35:00,200
I can see the test duration remaining,
875
00:35:00,200 --> 00:35:01,590
total test duration,
876
00:35:01,590 --> 00:35:03,270
total number of virtual users,
877
00:35:03,270 --> 00:35:04,860
distributed load zones.
878
00:35:04,860 --> 00:35:07,040
If I hover over these load zones,
879
00:35:07,040 --> 00:35:10,110
I can see the public IP, as
well as the instance size,
880
00:35:10,110 --> 00:35:12,340
I can see who this test was started by,
881
00:35:12,340 --> 00:35:13,950
and I can also read that note
882
00:35:13,950 --> 00:35:16,333
that I passed as an environment variable.
883
00:35:18,710 --> 00:35:19,620
Directly below that,
884
00:35:19,620 --> 00:35:21,030
we have the performance overview,
885
00:35:21,030 --> 00:35:22,200
we can see the requests made,
886
00:35:22,200 --> 00:35:23,420
HTTP failures,
887
00:35:23,420 --> 00:35:25,550
active virtual users, RPS,
888
00:35:25,550 --> 00:35:27,290
and average response time.
889
00:35:27,290 --> 00:35:28,880
And if we hover over the graph below,
890
00:35:28,880 --> 00:35:30,570
we can see the active virtual users
891
00:35:30,570 --> 00:35:32,540
as they begin to enter our test script,
892
00:35:32,540 --> 00:35:35,050
response time metric,
request rate metric,
893
00:35:35,050 --> 00:35:37,020
and failed request rate.
894
00:35:37,020 --> 00:35:37,860
Directly below that,
895
00:35:37,860 --> 00:35:39,780
we have our performance insights.
896
00:35:39,780 --> 00:35:42,660
These are k6's own proprietary algorithms
897
00:35:42,660 --> 00:35:44,000
that we've developed in-house,
898
00:35:44,000 --> 00:35:46,400
what they'll do is scan
through your results data,
899
00:35:46,400 --> 00:35:50,220
if we detect any anomalies
behind your results data,
900
00:35:50,220 --> 00:35:51,530
we'll make some suggestions
901
00:35:51,530 --> 00:35:53,570
on how you could remediate those issues
902
00:35:53,570 --> 00:35:55,123
with inside of your test.
903
00:35:56,200 --> 00:35:57,800
So, taking a look here,
904
00:35:57,800 --> 00:36:01,210
we have about four minutes
left behind our tests,
905
00:36:01,210 --> 00:36:03,493
so maybe we want to
come down, take a look.
906
00:36:04,560 --> 00:36:06,873
So, I have three
thresholds behind my test,
907
00:36:06,873 --> 00:36:08,610
my test is continuing to run.
908
00:36:08,610 --> 00:36:10,883
Everything looks good for now.
909
00:36:11,800 --> 00:36:15,090
So, maybe you want to
come back to the terminal.
910
00:36:15,090 --> 00:36:19,080
I see that my console log of
the note is printing,
911
00:36:19,080 --> 00:36:21,100
and this can get a little noisy.
912
00:36:21,100 --> 00:36:23,210
And we also see our tests
running in real time,
913
00:36:23,210 --> 00:36:25,360
so everything's working as it should,
914
00:36:25,360 --> 00:36:27,360
none of my thresholds have been exceeded
915
00:36:27,360 --> 00:36:29,023
to the point of failure yet.
916
00:36:29,920 --> 00:36:31,430
Okay so, I'm coming back here,
917
00:36:31,430 --> 00:36:33,410
taking a look at my thresholds.
918
00:36:33,410 --> 00:36:35,160
I can open these thresholds up,
919
00:36:35,160 --> 00:36:37,993
I see that my check failure
rating has been exceeded.
920
00:36:39,170 --> 00:36:41,140
It exceeded that rate of 30%,
921
00:36:41,140 --> 00:36:43,670
returning a rate closer to 42%,
922
00:36:43,670 --> 00:36:45,250
letting me know that there's a problem
923
00:36:45,250 --> 00:36:47,100
with my check failure rating.
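A minimal sketch of the kind of threshold being inspected here: a custom Rate metric allowed to fail at most 30% of the time, which this run exceeded at roughly 42%. The metric name is an assumption; it simply has to match the Rate metric used in the script.

```js
import { Rate } from 'k6/metrics';

const checkFailureRate = new Rate('check_failure_rate');

export const options = {
  thresholds: {
    check_failure_rate: ['rate<0.3'], // mark the threshold as crossed if more than 30% of checks fail
  },
};

export default function () {
  checkFailureRate.add(false); // false = a passing check; the real script records !passed
}
```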
924
00:36:47,100 --> 00:36:50,023
What I can do now is add this
chart to our ANALYSIS tab.
925
00:36:51,260 --> 00:36:53,670
Also, coming over and
taking a look at our checks,
926
00:36:53,670 --> 00:36:56,180
I see that our homepage body
size is completely failing,
927
00:36:56,180 --> 00:36:57,980
however, our logged in welcome header
928
00:36:57,980 --> 00:37:00,690
is populating about a third of the time.
929
00:37:00,690 --> 00:37:01,750
I'm gonna open that up,
930
00:37:01,750 --> 00:37:04,190
add it to our ANALYSIS tab,
931
00:37:04,190 --> 00:37:07,130
and then wrapping up with
our HTTP request here.
932
00:37:07,130 --> 00:37:10,900
I wanna take a look at
my initial HTTP request
933
00:37:10,900 --> 00:37:11,950
from with inside of here,
934
00:37:11,950 --> 00:37:13,970
I have the ability to change this metric
935
00:37:13,970 --> 00:37:15,130
from the response time,
936
00:37:15,130 --> 00:37:17,440
to the request rate,
response time percentiles,
937
00:37:17,440 --> 00:37:19,090
or the timing breakdown.
938
00:37:19,090 --> 00:37:20,900
I have the ability to
change the aggregation
939
00:37:20,900 --> 00:37:22,320
from the average to the median,
940
00:37:22,320 --> 00:37:23,650
min, max, standard deviation,
941
00:37:23,650 --> 00:37:25,830
or any of the 90th percentiles.
942
00:37:25,830 --> 00:37:27,330
And then I can also visualize this
943
00:37:27,330 --> 00:37:29,320
according to a specified load zone.
944
00:37:29,320 --> 00:37:31,770
Maybe we wanna visualize
this according to Columbus,
945
00:37:31,770 --> 00:37:34,340
as that's where we push
the most amount of load
946
00:37:34,340 --> 00:37:36,180
behind this test.
947
00:37:36,180 --> 00:37:38,550
So, I'm gonna add that
chart to our ANALYSIS tab,
948
00:37:38,550 --> 00:37:41,490
finally wrapping up with
our ANALYSIS tab here.
949
00:37:41,490 --> 00:37:43,820
So, we have the comparison chart here,
950
00:37:43,820 --> 00:37:46,740
that's populated with our
VUs, response time metric,
951
00:37:46,740 --> 00:37:48,980
request rate, and failed request rate.
952
00:37:48,980 --> 00:37:50,740
And now I want to correlate some metrics
953
00:37:50,740 --> 00:37:53,330
that I select behind my test.
954
00:37:53,330 --> 00:37:55,240
So, I'm gonna include
that check failure rating,
955
00:37:55,240 --> 00:37:56,750
logged in welcome header present,
956
00:37:56,750 --> 00:37:58,430
and timing breakdown.
957
00:37:58,430 --> 00:38:02,450
And then, as I hover over the
metrics that I've selected,
958
00:38:02,450 --> 00:38:06,260
I can view all the data in a
correlated and meaningful way,
959
00:38:06,260 --> 00:38:09,393
and I can derive those
insights from the metrics here.
960
00:38:11,340 --> 00:38:13,210
I also have the ability to add a metric.
961
00:38:13,210 --> 00:38:16,120
Maybe we wanna include
a group duration metric
962
00:38:16,120 --> 00:38:18,190
across all groups behind our test,
963
00:38:18,190 --> 00:38:20,703
I can also add that
quickly and easily here.
964
00:38:21,940 --> 00:38:26,760
Now, one thing I want to get
to before we run out of time,
965
00:38:26,760 --> 00:38:30,780
one of our viewers had a
question about exporting metrics.
966
00:38:30,780 --> 00:38:34,910
So, with that, we can export
by clicking this arrow here,
967
00:38:34,910 --> 00:38:36,480
we can share the test results
968
00:38:36,480 --> 00:38:39,310
by generating a shareable
link, as I discussed.
969
00:38:39,310 --> 00:38:40,380
We can copy that.
970
00:38:40,380 --> 00:38:42,793
And let's open up a blank page here.
971
00:38:43,850 --> 00:38:48,053
I'm gonna plug in that URL
with inside of our web browser.
972
00:38:50,800 --> 00:38:52,133
And here we can see our test results.
973
00:38:52,133 --> 00:38:57,133
- Hey, could you paste
that into the chat here
974
00:38:57,700 --> 00:38:58,942
so that I can put it in
the video chat as well?
975
00:38:58,942 --> 00:38:59,775
- Oh yeah.
976
00:39:01,803 --> 00:39:04,680
- Just so people can have a look as well.
977
00:39:04,680 --> 00:39:05,730
- Yeah, absolutely!
978
00:39:05,730 --> 00:39:07,680
So, and as I mentioned before,
979
00:39:07,680 --> 00:39:09,970
anybody in the organization
can access this,
980
00:39:09,970 --> 00:39:12,340
they don't need to be associated
981
00:39:12,340 --> 00:39:14,800
with the k6 Cloud subscription.
982
00:39:14,800 --> 00:39:17,000
Now, coming back with
inside of the web app,
983
00:39:17,870 --> 00:39:19,670
I'm gonna change gears a little bit,
984
00:39:19,670 --> 00:39:21,720
just to demonstrate that
we do have the ability
985
00:39:21,720 --> 00:39:25,390
to export our metrics in different ways.
986
00:39:25,390 --> 00:39:26,810
So, if we come back with inside of here,
987
00:39:26,810 --> 00:39:30,193
we can export our data with
inside of a CSV document.
988
00:39:31,680 --> 00:39:32,940
And then last, but not least,
989
00:39:32,940 --> 00:39:36,470
we also have that ability to
generate a PDF summary report.
990
00:39:36,470 --> 00:39:38,800
Extremely beneficial if you have any,
991
00:39:38,800 --> 00:39:43,410
let's say, high level executives,
board members, et cetera,
992
00:39:43,410 --> 00:39:46,130
that are not used to seeing the raw data
993
00:39:46,130 --> 00:39:49,250
and want to see it in a meaningful way.
994
00:39:49,250 --> 00:39:51,100
We have this executive PDF summary,
995
00:39:51,100 --> 00:39:54,000
we have a little bit of
customizability with inside of here.
996
00:39:54,000 --> 00:39:56,050
You have the ability to select the metrics
997
00:39:56,050 --> 00:39:58,420
for this given report.
998
00:39:58,420 --> 00:40:00,203
We can go ahead and add a note.
999
00:40:01,560 --> 00:40:02,560
And once we're ready,
1000
00:40:02,560 --> 00:40:04,920
we can generate our PDF here.
1001
00:40:04,920 --> 00:40:06,780
This will assemble all the custom metrics
1002
00:40:06,780 --> 00:40:08,560
that we've selected behind our report,
1003
00:40:08,560 --> 00:40:09,890
as well as any custom notes
1004
00:40:09,890 --> 00:40:13,970
that have been integrated with
inside of that given report.
1005
00:40:13,970 --> 00:40:16,470
It will arrive in the form of a download
1006
00:40:16,470 --> 00:40:18,080
with inside of your downloads folder,
1007
00:40:18,080 --> 00:40:20,620
and from there, you can
either print the document out,
1008
00:40:20,620 --> 00:40:21,960
upload it to a drive,
1009
00:40:21,960 --> 00:40:26,660
or attach it to an email for its recipient.
1010
00:40:26,660 --> 00:40:30,313
To show you the finalized
version of the report,
1011
00:40:31,460 --> 00:40:35,520
here we have our k6 Cloud
Demo Executive Summary Report,
1012
00:40:35,520 --> 00:40:37,830
and we have those
metrics that we selected,
1013
00:40:37,830 --> 00:40:40,363
and our notes saved successfully here.
1014
00:40:41,666 --> 00:40:43,840
- And if somebody wants
to get started with that,
1015
00:40:43,840 --> 00:40:47,763
and wants to book a demo with
you, or just a conversation,
1016
00:40:48,850 --> 00:40:51,080
what's the best way for them to do that?
1017
00:40:51,080 --> 00:40:53,880
- So, the easiest way would
be just sending an email
1018
00:40:53,880 --> 00:40:56,990
directly to either
[email protected],
1019
00:40:56,990 --> 00:41:01,760
or sending an email to me at
[email protected].
1020
00:41:01,760 --> 00:41:02,790
- Thank you, Bill,
1021
00:41:02,790 --> 00:41:05,460
and we'll see you next week.
- Yeah, absolutely.
1022
00:41:05,460 --> 00:41:06,710
Thanks everybody.
- Bye!