yangql / googletest

Commit 33a0d4f6, authored Sep 10, 2019 by Abseil Team, committed by Shaindel Schwartz on Sep 10, 2019 (parent 3a450398)

Googletest export

Some mechanical and grammatical fixes to the Primer. PiperOrigin-RevId: 268200783

Changes: 1 changed file, with 42 additions and 43 deletions

googletest/docs/primer.md (+42 −43)
@@ -5,9 +5,9 @@

*googletest* helps you write better C++ tests.

googletest is a testing framework developed by the Testing Technology team with
Google's specific requirements and constraints in mind. Whether you work on
Linux, Windows, or a Mac, if you write C++ code, googletest can help you. And it
supports *any* kind of tests, not just unit tests.

So what makes a good test, and how does googletest fit in? We believe:
@@ -21,7 +21,7 @@ So what makes a good test, and how does googletest fit in? We believe:

    easy to maintain. Such consistency is especially helpful when people switch
    projects and start to work on a new code base.
3.  Tests should be *portable* and *reusable*. Google has a lot of code that is
    platform-neutral; its tests should also be platform-neutral. googletest
    works on different OSes, with different compilers, with or without
    exceptions, so googletest tests can work with a variety of configurations.
4.  When tests fail, they should provide as much *information* about the problem
@@ -44,18 +44,17 @@ minutes to learn the basics and get started. So let's go!

## Beware of the nomenclature

_Note:_ There might be some confusion arising from different definitions of the
terms _Test_, _Test Case_ and _Test Suite_, so beware of misunderstanding these.

Historically, googletest started to use the term _Test Case_ for grouping
related tests, whereas current publications, including International Software
Testing Qualifications Board ([ISTQB](http://www.istqb.org/)) materials and
various textbooks on software quality, use the term
_[Test Suite][istqb test suite]_ for this.

The related term _Test_, as it is used in googletest, corresponds to the term
_[Test Case][istqb test case]_ of ISTQB and others.

The term _Test_ is commonly of broad enough sense, including ISTQB's definition
of _Test Case_, so it's not much of a problem here. But the term _Test Case_ as
@@ -120,7 +119,7 @@ Depending on the nature of the leak, it may or may not be worth fixing - so keep

this in mind if you get a heap checker error in addition to assertion errors.

To provide a custom failure message, simply stream it into the macro using the
`<<` operator or a sequence of such operators. An example:

```c++
ASSERT_EQ(x.size(), y.size()) << "Vectors x and y are of unequal length";
```
@@ -166,16 +165,16 @@ Fatal assertion | Nonfatal assertion | Verifies

Value arguments must be comparable by the assertion's comparison operator or
you'll get a compiler error. We used to require the arguments to support the
`<<` operator for streaming to an `ostream`, but this is no longer necessary. If
`<<` is supported, it will be called to print the arguments when the assertion
fails; otherwise googletest will attempt to print them in the best way it can.
For more details and how to customize the printing of the arguments, see the
[documentation](../../googlemock/docs/cook_book.md#teaching-gmock-how-to-print-your-values).

These assertions can work with a user-defined type, but only if you define the
corresponding comparison operator (e.g., `==` or `<`). Since this is discouraged
by the Google [C++ Style
Guide](https://google.github.io/styleguide/cppguide.html#Operator_Overloading),
you may need to use `ASSERT_TRUE()` or `EXPECT_TRUE()` to assert the equality of
two objects of a user-defined type.
@@ -185,8 +184,8 @@ values on failure.

Arguments are always evaluated exactly once. Therefore, it's OK for the
arguments to have side effects. However, as with any ordinary C/C++ function,
the arguments' evaluation order is undefined (i.e., the compiler is free to
choose any order), and your code should not depend on any particular argument
evaluation order.
`ASSERT_EQ()` does pointer equality on pointers. If used on two C strings, it
@@ -199,7 +198,7 @@ objects, you should use `ASSERT_EQ`.

When doing pointer comparisons use `*_EQ(ptr, nullptr)` and `*_NE(ptr, nullptr)`
instead of `*_EQ(ptr, NULL)` and `*_NE(ptr, NULL)`. This is because `nullptr` is
typed, while `NULL` is not. See the [FAQ](faq.md) for more details.
If you're working with floating point numbers, you may want to use the floating
point variations of some of these macros in order to avoid problems caused by
@@ -246,7 +245,7 @@ Advanced googletest Guide.

To create a test:

1.  Use the `TEST()` macro to define and name a test function. These are
    ordinary C++ functions that don't return a value.
2.  In this function, along with any valid C++ statements you want to include,
    use the various googletest assertions to check values.
@@ -263,7 +262,7 @@ TEST(TestSuiteName, TestName) {

`TEST()` arguments go from general to specific. The *first* argument is the name
of the test suite, and the *second* argument is the test's name within the test
case. Both names must be valid C++ identifiers, and they should not contain any
underscores (`_`). A test's *full name* consists of its containing test suite and
its individual name. Tests from different test suites can have the same
individual name.
@@ -290,7 +289,7 @@ TEST(FactorialTest, HandlesPositiveInput) {

}
```

googletest groups the test results by test suites, so logically related tests
should be in the same test suite; in other words, the first argument to their
`TEST()` should be the same. In the above example, we have two tests,
`HandlesZeroInput` and `HandlesPositiveInput`, that belong to the same test
@@ -305,18 +304,18 @@ for

## Test Fixtures: Using the Same Data Configuration for Multiple Tests {#same-data-multiple-tests}

If you find yourself writing two or more tests that operate on similar data, you
can use a *test fixture*. This allows you to reuse the same configuration of
objects for several different tests.

To create a fixture:

1.  Derive a class from `::testing::Test`. Start its body with `protected:`, as
    we'll want to access fixture members from sub-classes.
2.  Inside the class, declare any objects you plan to use.
3.  If necessary, write a default constructor or `SetUp()` function to prepare
    the objects for each test. A common mistake is to spell `SetUp()` as
    **`Setup()`** with a small `u` - Use `override` in C++11 to make sure you
    spelled it correctly.
4.  If necessary, write a destructor or `TearDown()` function to release any
    resources you allocated in `SetUp()`. To learn when you should use the
    constructor/destructor and when you should use `SetUp()/TearDown()`, read
@@ -344,9 +343,9 @@ Also, you must first define a test fixture class before using it in a

`TEST_F()`, or you'll get the compiler error "`virtual outside class
declaration`".

For each test defined with `TEST_F()`, googletest will create a *fresh* test
fixture at runtime, immediately initialize it via `SetUp()`, run the test,
clean up by calling `TearDown()`, and then delete the test fixture. Note that
different tests in the same test suite have different test fixture objects, and
googletest always deletes a test fixture before it creates the next one.
googletest does **not** reuse the same test fixture for multiple tests. Any
@@ -424,9 +423,9 @@ would lead to a segfault when `n` is `NULL`.

When these tests run, the following happens:

1.  googletest constructs a `QueueTest` object (let's call it `t1`).
2.  `t1.SetUp()` initializes `t1`.
3.  The first test (`IsEmptyInitially`) runs on `t1`.
4.  `t1.TearDown()` cleans up after the test finishes.
5.  `t1` is destructed.
6.  The above steps are repeated on another `QueueTest` object, this time
@@ -440,14 +439,14 @@ When these tests run, the following happens:
...
@@ -440,14 +439,14 @@ When these tests run, the following happens:
unlike with many other C++ testing frameworks, you don't have to re-list all
unlike with many other C++ testing frameworks, you don't have to re-list all
your defined tests in order to run them.
your defined tests in order to run them.
After defining your tests, you can run them with
`RUN_ALL_TESTS()`
, which
After defining your tests, you can run them with
`RUN_ALL_TESTS()`
, which
returns
`0`
if all the tests are successful, or
`1`
otherwise. Note that
returns
`0`
if all the tests are successful, or
`1`
otherwise. Note that
`RUN_ALL_TESTS()`
runs
*all tests*
in your link unit
--
they can be from
`RUN_ALL_TESTS()`
runs
*all tests*
in your link unit--they can be from
different test suites, or even different source files.
different test suites, or even different source files.
When invoked, the
`RUN_ALL_TESTS()`
macro:
When invoked, the
`RUN_ALL_TESTS()`
macro:
*
Saves the state of all googletest flags
*
Saves the state of all googletest flags
.
*
Creates a test fixture object for the first test.
*
Creates a test fixture object for the first test.
@@ -459,7 +458,7 @@ When invoked, the `RUN_ALL_TESTS()` macro:

*   Deletes the fixture.
*   Restores the state of all googletest flags.
*   Repeats the above steps for the next test, until all tests have run.
@@ -472,7 +471,7 @@ If a fatal failure happens the subsequent steps will be skipped.

> return the value of `RUN_ALL_TESTS()`.
>
> Also, you should call `RUN_ALL_TESTS()` only **once**. Calling it more than
> once conflicts with some advanced googletest features (e.g., thread-safe
> [death tests](advanced.md#death-tests)) and thus is not supported.

**Availability**: Linux, Windows, Mac.
@@ -480,7 +479,7 @@ If a fatal failure happens the subsequent steps will be skipped.

## Writing the main() Function

Write your own main() function, which should return the value of
`RUN_ALL_TESTS()`.

You can start from this boilerplate:
@@ -544,14 +543,14 @@ int main(int argc, char **argv) {

The `::testing::InitGoogleTest()` function parses the command line for
googletest flags, and removes all recognized flags. This allows the user to
control a test program's behavior via various flags, which we'll cover in the
[AdvancedGuide](advanced.md). You **must** call this function before calling
`RUN_ALL_TESTS()`, or the flags won't be properly initialized.

On Windows, `InitGoogleTest()` also works with wide strings, so it can be used
in programs compiled in `UNICODE` mode as well.

But maybe you think that writing all those main() functions is too much work? We
agree with you completely, and that's why Google Test provides a basic
implementation of main(). If it fits your needs, then just link your test with
gtest\_main library and you are good to go.