You consent to me using your answers for data analysis
frequently asked questions (updated 2018/11/14)
What is the Grant-Brownsword function model?
In 1983, William Harold Grant, along with Magdala Thompson and Thomas E. Clarke, authored a book relating Jungian personality types to the Gospel by correlating Biblical themes to Jung’s functions. Titled From Image to Likeness: A Jungian Path in the Gospel Journey, the book’s main purpose was to help the reader understand the importance and meaning of «God’s image» and how to evoke it within themselves on a journey from image to likeness. But this work contained a tidbit that would come to shape typology today: a new psychological model.
Grant dubbed it the third major model, highlighting how it «views Jung’s functions and attitudes on the basis of a developmental typology.» This model was based on their observations from several hundred people involved in their retreats and workshops (frequently referenced as «R/W» throughout their preface) along with thousands of students from two universities; it specifically referred to four stages of development from the ages of six to fifty.
Grant understood his model was a deviation from conventional interpretations of Jung’s work and did not expect to «find support within the Jungian tradition». In his own words, «admittedly, it needed further testing.» Grant included his model in the book in order to encourage people to view their personalities not statically but dynamically.
Alan W. Brownsword went on to write It Takes All Types! in 1987, claiming to use Grant’s model «in accordance with» Myers-Briggs types. This is not actually the case; Brownsword seemed to share an incorrect belief, common among personality theorists of his time, about the nature of «Type,» and this led him to commit categorical errors when interpreting Jungian theory and Myers’ work with the MBTI. On the E/I orientations of the tertiary and inferior functions, Brownsword says only that «not all of students of Jung seem to agree with [the tertiary function sharing the same direction as the dominant function],» and he dismisses the more accepted**** interpretation of Jung’s work, under which the «tertiary function» would be introverted, with the claim that «it just doesn’t seem to work that way.» Consider Brownsword’s model an awkward amalgamation of Jungian psychological types, Myers-Briggs theory, W.H. Grant’s third model, and his own interpretation of what’s really going on.
The function stack today originated with Grant and Brownsword, but it has been popularized by figures like Linda Berens and Dario Nardi. There is a lot of history behind how this came about, which you can read more about here: Full context: the cognitive functions.
**** the idea of an «alternating stack,» where the functions would be ordered IEIE or EIEI, is fundamentally at odds with how Jung described the function attitudes. Jung never made a stack template, but if he had, the attitudes would only ever appear in at most two contiguous blocks (i.e. IEEE, EEII, and IIIE would be acceptable, but not IEEI). Brownsword said that the «tertiary» function would be introverted according to Jungian analysts, but what those analysts’ (correct) reading of Jung actually implies is that a function in that position would be introverted; «tertiary» functions are not a thing in Jung’s Psychological Types.
I don’t understand—how is all of this calculated?
I used to publish the exact formulas for these calculations, but I like the idea of the numbers themselves being publicly ambiguous. Still, I don’t really have a reason to be obscure about how the formulas are set up:
The Grant-Brownsword algorithm calculates a score for each of the sixteen possible types by adding weighted totals for the dominant, auxiliary, and (very weakly) tertiary functions, then subtracting a weighted inferior-function total in the final tally. It would look something like this: a(dominant) + b(auxiliary) + c(tertiary) - d(inferior) = type_score
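As a rough sketch of that pass in Python (the actual weights and per-function totals are intentionally unpublished, so the numbers and the `STACKS` entries below are placeholders, not the test’s real values):

```python
# Placeholder stacks: (dominant, auxiliary, tertiary, inferior) per type.
# Only two types are listed for brevity; the real table has all sixteen.
STACKS = {
    "INTP": ("Ti", "Ne", "Si", "Fe"),
    "ENFP": ("Ne", "Fi", "Te", "Si"),
}

def type_score(totals, stack, a=3.0, b=2.0, c=0.5, d=1.0):
    """a(dominant) + b(auxiliary) + c(tertiary) - d(inferior).

    The weights a, b, c, d are invented for illustration; note c is
    small (tertiary counts "very weakly") and d is subtracted.
    """
    dom, aux, ter, inf = stack
    return a * totals[dom] + b * totals[aux] + c * totals[ter] - d * totals[inf]

def best_type(totals):
    """Score every listed type and return the highest-scoring one."""
    return max(STACKS, key=lambda t: type_score(totals, STACKS[t]))
```

With a response profile that favors Ti and Ne, the INTP stack outscores the ENFP stack, so `best_type` returns INTP; any real implementation would run the same comparison across all sixteen stacks.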
The axis-based algorithm assumes that there are no inferior functions in your stack and that functions on opposite ends form axes that you either prefer or don’t: your combined scores for Ne/Si are compared against Ni/Se, and likewise Te/Fi against Ti/Fe. The algorithm then tries to figure out which of the four «valued» functions you prefer should be dominant, and voila! You get your type.
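A minimal sketch of that comparison, under two stated assumptions: that the judging axes pair Te/Fi against Ti/Fe (mirroring the perceiving axes), and that ties and weighting work like a plain sum (the real tie-breaking and weights are not public):

```python
# Each tuple pairs one axis against its opposite: (a1, a2) vs (b1, b2).
AXES = [("Ne", "Si", "Ni", "Se"),   # perceiving axes
        ("Te", "Fi", "Ti", "Fe")]   # judging axes (assumed pairing)

def valued_functions(scores):
    """Pick the preferred side of each axis; no inferior functions assumed."""
    valued = []
    for a1, a2, b1, b2 in AXES:
        if scores[a1] + scores[a2] >= scores[b1] + scores[b2]:
            valued.extend([a1, a2])
        else:
            valued.extend([b1, b2])
    return valued

def dominant(scores):
    """The strongest of the four valued functions becomes the dominant."""
    return max(valued_functions(scores), key=lambda f: scores[f])
```

For example, a responder whose Ne/Si sum beats Ni/Se and whose Te/Fi sum beats Ti/Fe gets Ne, Si, Te, Fi as valued functions, and the highest-scoring of those four is taken as dominant, which then fixes the type.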
Why isn’t my Myers-Briggs result the same as my function result?
Because they aren’t the same thing. Your Myers-Briggs result is based on the letter values assigned to each question (for example, agreeing with question #42 most significantly increases your E, N, and P scores even though it would give you 2 points for «Se») and your two other results are based only on the raw function algorithms. They are scored differently and mean different things.
How accurate is the test?
That really depends on what «accurate» means to you. My test is only meant to take your answers, run the formulas, and give you a result based on those formulas; the test is 100% accurate solely in that regard. Whether your result is an accurate reflection of your «function type» or your Myers-Briggs type is for you to decide.
But I should stress an important detail: I’ve received a little over 10k responses to date, and I’ve been able to compare purported Myers-Briggs types on this test with the types received on the «raw» Form Q test. Unfortunately, crossover data is scarce: only a tiny percentage of the slightly-less-than-10k responders (you can take tests more than once) have taken both the raw Form Q test and the function test. There is a slight NP/SJ bias in the margins, so I would seriously consider J for you if you scored «strong/clear N» and «undifferentiated» on J/P, or S if you scored «undifferentiated» and «strong/clear P,» etc. My big problem is that I can’t offset the results with numerical addends or subtrahends, because the gaps between these results are often relative, not absolute.
For now, I would just recommend interpreting your results with this in mind, but I may add a permalink for your results for inquiry purposes soon.
But your test is totally inaccurate! The questions suck, and I know I’m definitely not the type I got.
It’s really anyone’s guess what an «accurate» interpretation of the functions is, because such a thing doesn’t actually exist. I know, crazy. Maybe you think those definitions are absolutely wrong, maybe somebody else thinks those definitions are absolutely correct. There isn’t a consensus on what function theory is, and there frankly never will be.
But if you do think you have all the answers, I added an option to give the test an accuracy score: not for your results, since you haven’t seen them yet, but for how well the questions «assess» your functions. It’s a little dumb because no one actually knows which question scores for which function before they get their results, but it would be wonky to attach post-result data to already-submitted results. I’m sure there’s a way, and I’ll have to experiment with what works best.
You consent to me using your answers for data analysis
frequently asked questions (updated 2018/11/14)
What is the Grant-Brownsword function model?
In 1983, William Harold Grant, along with Magdala Thompson and Thomas E. Clarke, authored a book relating Jungian personality types to the Gospel by correlating Biblical themes to Jung’s functions. Titled From Image to Likeness: A Jungian Path in the Gospel Journey, the main purpose of this book was to encourage the reader to understand the importance and the meaning of «God’s image» and how to evoke it within you on a journey from image to likeness. But this work contained a tidbit that would come to shape typology today: a new psychological model.
Grant dubbed it the third major model, highlighting how it «views Jung’s functions and attitudes on the basis of a developmental typology.» This model was based on their observations from several hundred people involved in their retreats and workshops (frequently referenced as «R/W» throughout their preface) along with thousands of students from two universities; it specifically referred to four stages of development from the ages of six to fifty.
Grant understood his model was a deviation from conventional interpretations of Jung’s work and did not expect to «find support within the Jungian tradition». In his own words, «admittedly, it needed further testing.» Grant included his model in the book in order to encourage people to view their personalities not statically but dynamically.
Alan W. Brownsword would end up writing It Takes All Types! in 1987, utilizing Grant’s model «in accordance with» Myers-Briggs types. This is not actually the case; Brownsword seemed to share an incorrect belief with many personality theorists from his time about the nature of «Type,» and this caused him to commit categorical errors when interpreting Jungian theory and Myers’ work with the MBTI. When talking about the E/I orientations of the tertiary and inferior functions, Brownsword only says that «not all of students of Jung seem to agree with [the tertiary function sharing the same direction as the dominant function]» and dismisses the more accepted**** interpretation of Jung’s work claiming that the «tertiary function» would be introverted with a claim that «it just doesn’t seem to work that way.» Consider Brownsword’s model to be an awkward amalgamation of Jungian psychological types, Myers-Briggs theory, W.H. Grant’s third model, and his own interpretation of what’s really going on.
The function stack today originated with Grant and Brownsword, but has been popularized by figures like Linda Berens and Dario Nardi. There is a lot of history behind how this had come about, which you can read more about here: Full context: the cognitive functions.
**** the idea of having an «alternating stack» where the functions would be ordered IEIE or EIEI is fundamentally against how Jung described the function attitudes. Jung never made a stack template, but if he did, the directions would only ever work with two exclusive directions (i.e. IEEE, EEII, and IIIE would be acceptable, but not IEEI). Brownsword talked about how the «tertiary» function would be introverted according to Jungian analysts but he really meant that a function in that position would be introverted in their (correct) analysis of Jung’s work; «tertiary» functions are not a thing in Jung’s Psychological Types.
I don’t understand—how is all of this calculated?
I used to give the exact formulas for the calculations before, but I like the idea of the numbers themselves being publicly ambiguous. But I really don’t have a reason to be obscure about how the formulas are set up:
The Grant-Brownsword algorithm calculates a score for all sixteen possible types by adding up weighted totals for the dominant, auxiliary, and—very weakly—tertiary functions, then subtracting weighted inferior function totals in the final add-up. It would look something like this: a(dominant)+b(auxiliary)+c(tertiary)-d(inferior) = type_score
The axis-based algorithm will assume that there are no inferior functions in your stack, and that functions on opposite ends create axes that you would either prefer or not prefer, so in other words, your scores for Ne/Si are compared to Ni/Se, and the same thing goes for Se/Ni and Ni/Se. The algorithm then tries to figure out which one of those four «valued» functions you prefer should be dominant, and voila! You get your type.
Why isn’t my Myers-Briggs result the same as my function result?
Because they aren’t the same thing. Your Myers-Briggs result is based on the letter values assigned to each question (for example, agreeing with question #42 most significantly increases your E, N, and P scores even though it would give you 2 points for «Se») and your two other results are based only on the raw function algorithms. They are scored differently and mean different things.
How accurate is the test?
That really depends on what «accurate» means to you. My test is only meant to take your answers, run the formulas, and give you a result based on those formulas; this test would be 100% accurate solely with regards to that. Whether or not your result will be an accurate reflection of your «function type» or your Myers-Briggs type is up for you to decide.
But I should stress an important detail: I’ve received a little bit over 10k responses to date, and I’ve been able to compare purported Myers-Briggs types on this test with the types received on «raw» Form Q. Unfortunately, crossover data is scarce, and only about a tiny percentage of the slightly-less-than-10k responders (you can take tests more than once) have taken both the raw Form Q test and the function test. There is a slight NP/SJ bias in the margins, so I would seriously consider J for you if you scored «strong/clear N» and «undifferentiated» on J/P, or S if you scored «undifferentiated» and «strong/clear P,» etc. But my big problem is that I can’t offset the results with numerical addends or subtrahends because the gaps between these results are often relative, not absolute.
For now, I would just recommend interpreting your results with this in mind, but I may add a permalink for your results for inquiry purposes soon.
But your test is totally inaccurate! The questions suck, and I know I’m definitely not the type I got.
It’s really anyone’s guess what an «accurate» interpretation of the functions is, because such a thing doesn’t actually exist. I know, crazy. Maybe you think those definitions are absolutely wrong, maybe somebody else thinks those definitions are absolutely correct. There isn’t a consensus on what function theory is, and there frankly never will be.
But if you do think you have all the answers, I added an option for people to choose an accuracy score for the test—not of their results since they haven’t seen them—but for the questions in «assessing» your functions. It’s a little dumb because no one actually knows which question scores for which function before they get their results, but it would be a little wonky adding post-result data to already-submitted results. I’m sure there’s a way, and I’ll have to experiment with what works best.
You consent to me using your answers for data analysis
frequently asked questions (updated 2018/11/14)
What is the Grant-Brownsword function model?
In 1983, William Harold Grant, along with Magdala Thompson and Thomas E. Clarke, authored a book relating Jungian personality types to the Gospel by correlating Biblical themes to Jung’s functions. Titled From Image to Likeness: A Jungian Path in the Gospel Journey, the main purpose of this book was to encourage the reader to understand the importance and the meaning of «God’s image» and how to evoke it within you on a journey from image to likeness. But this work contained a tidbit that would come to shape typology today: a new psychological model.
Grant dubbed it the third major model, highlighting how it «views Jung’s functions and attitudes on the basis of a developmental typology.» This model was based on their observations from several hundred people involved in their retreats and workshops (frequently referenced as «R/W» throughout their preface) along with thousands of students from two universities; it specifically referred to four stages of development from the ages of six to fifty.
Grant understood his model was a deviation from conventional interpretations of Jung’s work and did not expect to «find support within the Jungian tradition». In his own words, «admittedly, it needed further testing.» Grant included his model in the book in order to encourage people to view their personalities not statically but dynamically.
Alan W. Brownsword would end up writing It Takes All Types! in 1987, utilizing Grant’s model «in accordance with» Myers-Briggs types. This is not actually the case; Brownsword seemed to share an incorrect belief with many personality theorists from his time about the nature of «Type,» and this caused him to commit categorical errors when interpreting Jungian theory and Myers’ work with the MBTI. When talking about the E/I orientations of the tertiary and inferior functions, Brownsword only says that «not all of students of Jung seem to agree with [the tertiary function sharing the same direction as the dominant function]» and dismisses the more accepted**** interpretation of Jung’s work claiming that the «tertiary function» would be introverted with a claim that «it just doesn’t seem to work that way.» Consider Brownsword’s model to be an awkward amalgamation of Jungian psychological types, Myers-Briggs theory, W.H. Grant’s third model, and his own interpretation of what’s really going on.
The function stack today originated with Grant and Brownsword, but has been popularized by figures like Linda Berens and Dario Nardi. There is a lot of history behind how this had come about, which you can read more about here: Full context: the cognitive functions.
**** the idea of having an «alternating stack» where the functions would be ordered IEIE or EIEI is fundamentally against how Jung described the function attitudes. Jung never made a stack template, but if he did, the directions would only ever work with two exclusive directions (i.e. IEEE, EEII, and IIIE would be acceptable, but not IEEI). Brownsword talked about how the «tertiary» function would be introverted according to Jungian analysts but he really meant that a function in that position would be introverted in their (correct) analysis of Jung’s work; «tertiary» functions are not a thing in Jung’s Psychological Types.
I don’t understand—how is all of this calculated?
I used to give the exact formulas for the calculations before, but I like the idea of the numbers themselves being publicly ambiguous. But I really don’t have a reason to be obscure about how the formulas are set up:
The Grant-Brownsword algorithm calculates a score for all sixteen possible types by adding up weighted totals for the dominant, auxiliary, and—very weakly—tertiary functions, then subtracting weighted inferior function totals in the final add-up. It would look something like this: a(dominant)+b(auxiliary)+c(tertiary)-d(inferior) = type_score
The axis-based algorithm will assume that there are no inferior functions in your stack, and that functions on opposite ends create axes that you would either prefer or not prefer, so in other words, your scores for Ne/Si are compared to Ni/Se, and the same thing goes for Se/Ni and Ni/Se. The algorithm then tries to figure out which one of those four «valued» functions you prefer should be dominant, and voila! You get your type.
Why isn’t my Myers-Briggs result the same as my function result?
Because they aren’t the same thing. Your Myers-Briggs result is based on the letter values assigned to each question (for example, agreeing with question #42 most significantly increases your E, N, and P scores even though it would give you 2 points for «Se») and your two other results are based only on the raw function algorithms. They are scored differently and mean different things.
How accurate is the test?
That really depends on what «accurate» means to you. My test is only meant to take your answers, run the formulas, and give you a result based on those formulas; this test would be 100% accurate solely with regards to that. Whether or not your result will be an accurate reflection of your «function type» or your Myers-Briggs type is up for you to decide.
But I should stress an important detail: I’ve received a little bit over 10k responses to date, and I’ve been able to compare purported Myers-Briggs types on this test with the types received on «raw» Form Q. Unfortunately, crossover data is scarce, and only about a tiny percentage of the slightly-less-than-10k responders (you can take tests more than once) have taken both the raw Form Q test and the function test. There is a slight NP/SJ bias in the margins, so I would seriously consider J for you if you scored «strong/clear N» and «undifferentiated» on J/P, or S if you scored «undifferentiated» and «strong/clear P,» etc. But my big problem is that I can’t offset the results with numerical addends or subtrahends because the gaps between these results are often relative, not absolute.
For now, I would just recommend interpreting your results with this in mind, but I may add a permalink for your results for inquiry purposes soon.
But your test is totally inaccurate! The questions suck, and I know I’m definitely not the type I got.
It’s really anyone’s guess what an «accurate» interpretation of the functions is, because such a thing doesn’t actually exist. I know, crazy. Maybe you think those definitions are absolutely wrong, maybe somebody else thinks those definitions are absolutely correct. There isn’t a consensus on what function theory is, and there frankly never will be.
But if you do think you have all the answers, I added an option for people to choose an accuracy score for the test—not of their results since they haven’t seen them—but for the questions in «assessing» your functions. It’s a little dumb because no one actually knows which question scores for which function before they get their results, but it would be a little wonky adding post-result data to already-submitted results. I’m sure there’s a way, and I’ll have to experiment with what works best.
You consent to me using your answers for data analysis
frequently asked questions (updated 2018/11/14)
What is the Grant-Brownsword function model?
In 1983, William Harold Grant, along with Magdala Thompson and Thomas E. Clarke, authored a book relating Jungian personality types to the Gospel by correlating Biblical themes to Jung’s functions. Titled From Image to Likeness: A Jungian Path in the Gospel Journey, the main purpose of this book was to encourage the reader to understand the importance and the meaning of «God’s image» and how to evoke it within you on a journey from image to likeness. But this work contained a tidbit that would come to shape typology today: a new psychological model.
Grant dubbed it the third major model, highlighting how it «views Jung’s functions and attitudes on the basis of a developmental typology.» This model was based on their observations from several hundred people involved in their retreats and workshops (frequently referenced as «R/W» throughout their preface) along with thousands of students from two universities; it specifically referred to four stages of development from the ages of six to fifty.
Grant understood his model was a deviation from conventional interpretations of Jung’s work and did not expect to «find support within the Jungian tradition». In his own words, «admittedly, it needed further testing.» Grant included his model in the book in order to encourage people to view their personalities not statically but dynamically.
Alan W. Brownsword would end up writing It Takes All Types! in 1987, utilizing Grant’s model «in accordance with» Myers-Briggs types. This is not actually the case; Brownsword seemed to share an incorrect belief with many personality theorists from his time about the nature of «Type,» and this caused him to commit categorical errors when interpreting Jungian theory and Myers’ work with the MBTI. When talking about the E/I orientations of the tertiary and inferior functions, Brownsword only says that «not all of students of Jung seem to agree with [the tertiary function sharing the same direction as the dominant function]» and dismisses the more accepted**** interpretation of Jung’s work claiming that the «tertiary function» would be introverted with a claim that «it just doesn’t seem to work that way.» Consider Brownsword’s model to be an awkward amalgamation of Jungian psychological types, Myers-Briggs theory, W.H. Grant’s third model, and his own interpretation of what’s really going on.
The function stack today originated with Grant and Brownsword, but has been popularized by figures like Linda Berens and Dario Nardi. There is a lot of history behind how this had come about, which you can read more about here: Full context: the cognitive functions.
**** the idea of having an «alternating stack» where the functions would be ordered IEIE or EIEI is fundamentally against how Jung described the function attitudes. Jung never made a stack template, but if he did, the directions would only ever work with two exclusive directions (i.e. IEEE, EEII, and IIIE would be acceptable, but not IEEI). Brownsword talked about how the «tertiary» function would be introverted according to Jungian analysts but he really meant that a function in that position would be introverted in their (correct) analysis of Jung’s work; «tertiary» functions are not a thing in Jung’s Psychological Types.
I don’t understand—how is all of this calculated?
I used to give the exact formulas for the calculations before, but I like the idea of the numbers themselves being publicly ambiguous. But I really don’t have a reason to be obscure about how the formulas are set up:
The Grant-Brownsword algorithm calculates a score for all sixteen possible types by adding up weighted totals for the dominant, auxiliary, and—very weakly—tertiary functions, then subtracting weighted inferior function totals in the final add-up. It would look something like this: a(dominant)+b(auxiliary)+c(tertiary)-d(inferior) = type_score
The axis-based algorithm will assume that there are no inferior functions in your stack, and that functions on opposite ends create axes that you would either prefer or not prefer, so in other words, your scores for Ne/Si are compared to Ni/Se, and the same thing goes for Se/Ni and Ni/Se. The algorithm then tries to figure out which one of those four «valued» functions you prefer should be dominant, and voila! You get your type.
Why isn’t my Myers-Briggs result the same as my function result?
Because they aren’t the same thing. Your Myers-Briggs result is based on the letter values assigned to each question (for example, agreeing with question #42 most significantly increases your E, N, and P scores even though it would give you 2 points for «Se») and your two other results are based only on the raw function algorithms. They are scored differently and mean different things.
How accurate is the test?
That really depends on what «accurate» means to you. My test is only meant to take your answers, run the formulas, and give you a result based on those formulas; this test would be 100% accurate solely with regards to that. Whether or not your result will be an accurate reflection of your «function type» or your Myers-Briggs type is up for you to decide.
But I should stress an important detail: I’ve received a little bit over 10k responses to date, and I’ve been able to compare purported Myers-Briggs types on this test with the types received on «raw» Form Q. Unfortunately, crossover data is scarce, and only about a tiny percentage of the slightly-less-than-10k responders (you can take tests more than once) have taken both the raw Form Q test and the function test. There is a slight NP/SJ bias in the margins, so I would seriously consider J for you if you scored «strong/clear N» and «undifferentiated» on J/P, or S if you scored «undifferentiated» and «strong/clear P,» etc. But my big problem is that I can’t offset the results with numerical addends or subtrahends because the gaps between these results are often relative, not absolute.
For now, I would just recommend interpreting your results with this in mind, but I may add a permalink for your results for inquiry purposes soon.
But your test is totally inaccurate! The questions suck, and I know I’m definitely not the type I got.
It’s really anyone’s guess what an «accurate» interpretation of the functions is, because such a thing doesn’t actually exist. I know, crazy. Maybe you think those definitions are absolutely wrong, maybe somebody else thinks those definitions are absolutely correct. There isn’t a consensus on what function theory is, and there frankly never will be.
But if you do think you have all the answers, I added an option for people to choose an accuracy score for the test—not of their results since they haven’t seen them—but for the questions in «assessing» your functions. It’s a little dumb because no one actually knows which question scores for which function before they get their results, but it would be a little wonky adding post-result data to already-submitted results. I’m sure there’s a way, and I’ll have to experiment with what works best.
You consent to me using your answers for data analysis
frequently asked questions (updated 2018/11/14)
What is the Grant-Brownsword function model?
In 1983, William Harold Grant, along with Magdala Thompson and Thomas E. Clarke, authored a book relating Jungian personality types to the Gospel by correlating Biblical themes to Jung’s functions. Titled From Image to Likeness: A Jungian Path in the Gospel Journey, the main purpose of this book was to encourage the reader to understand the importance and the meaning of «God’s image» and how to evoke it within you on a journey from image to likeness. But this work contained a tidbit that would come to shape typology today: a new psychological model.
Grant dubbed it the third major model, highlighting how it «views Jung’s functions and attitudes on the basis of a developmental typology.» This model was based on their observations from several hundred people involved in their retreats and workshops (frequently referenced as «R/W» throughout their preface) along with thousands of students from two universities; it specifically referred to four stages of development from the ages of six to fifty.
Grant understood his model was a deviation from conventional interpretations of Jung’s work and did not expect to «find support within the Jungian tradition». In his own words, «admittedly, it needed further testing.» Grant included his model in the book in order to encourage people to view their personalities not statically but dynamically.
Alan W. Brownsword would end up writing It Takes All Types! in 1987, utilizing Grant’s model «in accordance with» Myers-Briggs types. This is not actually the case; Brownsword seemed to share an incorrect belief with many personality theorists from his time about the nature of «Type,» and this caused him to commit categorical errors when interpreting Jungian theory and Myers’ work with the MBTI. When talking about the E/I orientations of the tertiary and inferior functions, Brownsword only says that «not all of students of Jung seem to agree with [the tertiary function sharing the same direction as the dominant function]» and dismisses the more accepted**** interpretation of Jung’s work claiming that the «tertiary function» would be introverted with a claim that «it just doesn’t seem to work that way.» Consider Brownsword’s model to be an awkward amalgamation of Jungian psychological types, Myers-Briggs theory, W.H. Grant’s third model, and his own interpretation of what’s really going on.
The function stack today originated with Grant and Brownsword, but has been popularized by figures like Linda Berens and Dario Nardi. There is a lot of history behind how this had come about, which you can read more about here: Full context: the cognitive functions.
**** the idea of having an «alternating stack» where the functions would be ordered IEIE or EIEI is fundamentally against how Jung described the function attitudes. Jung never made a stack template, but if he did, the directions would only ever work with two exclusive directions (i.e. IEEE, EEII, and IIIE would be acceptable, but not IEEI). Brownsword talked about how the «tertiary» function would be introverted according to Jungian analysts but he really meant that a function in that position would be introverted in their (correct) analysis of Jung’s work; «tertiary» functions are not a thing in Jung’s Psychological Types.
I don’t understand—how is all of this calculated?
I used to give the exact formulas for the calculations, but now I like the idea of keeping the numbers themselves publicly ambiguous. Still, I really don’t have a reason to be obscure about how the formulas are set up:
The Grant-Brownsword algorithm calculates a score for all sixteen possible types by adding up weighted totals for the dominant, auxiliary, and—very weakly—tertiary functions, then subtracting a weighted inferior function total in the final tally. It would look something like this: a(dominant) + b(auxiliary) + c(tertiary) − d(inferior) = type_score
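As a sketch only: the weighted sum above might be implemented like this, with made-up weights (the real coefficients aren’t published here) and a handful of Grant-style stacks standing in for the full sixteen:

```python
# Hypothetical weights -- the actual coefficients used by the test are not these.
WEIGHTS = {"dominant": 4.0, "auxiliary": 2.0, "tertiary": 0.5, "inferior": 1.5}

# Grant-style stacks (dominant, auxiliary, tertiary, inferior) for a few example types.
STACKS = {
    "INTP": ("Ti", "Ne", "Si", "Fe"),
    "ENFJ": ("Fe", "Ni", "Se", "Ti"),
    "ISTJ": ("Si", "Te", "Fi", "Ne"),
}

def type_score(type_code, function_scores):
    """a(dominant) + b(auxiliary) + c(tertiary) - d(inferior) for one type."""
    dom, aux, tert, inf = STACKS[type_code]
    return (WEIGHTS["dominant"] * function_scores[dom]
            + WEIGHTS["auxiliary"] * function_scores[aux]
            + WEIGHTS["tertiary"] * function_scores[tert]
            - WEIGHTS["inferior"] * function_scores[inf])

def best_type(function_scores):
    """Score every candidate type and return the highest-scoring one."""
    return max(STACKS, key=lambda t: type_score(t, function_scores))
```

Note how the inferior term is subtracted rather than added: a strong score on a type’s inferior function counts as evidence against that type.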
The axis-based algorithm assumes that there are no inferior functions in your stack and that functions on opposite ends form axes that you either prefer or don’t prefer: your scores for Ne/Si are compared to Ni/Se, and likewise Te/Fi is compared to Ti/Fe. The algorithm then tries to figure out which one of those four «valued» functions you prefer should be dominant, and voila! You get your type.
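A minimal sketch of that axis comparison (the axis pairings follow the paragraph above; the tie-breaking by summed scores is my own assumption about how «prefer» is decided):

```python
# Each element pairs one axis against its opposite axis.
AXES = [
    (("Ne", "Si"), ("Ni", "Se")),  # perceiving axes
    (("Te", "Fi"), ("Ti", "Fe")),  # judging axes
]

def valued_functions(scores):
    """Pick the preferred axis from each pair, yielding four valued functions."""
    valued = []
    for axis_a, axis_b in AXES:
        total_a = sum(scores[f] for f in axis_a)
        total_b = sum(scores[f] for f in axis_b)
        valued.extend(axis_a if total_a >= total_b else axis_b)
    return valued

def dominant_function(scores):
    """The dominant is the highest-scoring of the four valued functions."""
    return max(valued_functions(scores), key=lambda f: scores[f])
```

With the dominant function and the valued axes in hand, the type follows directly (e.g. a dominant Ne with Ti/Fe valued pins the result down without ever scoring an inferior function).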
Why isn’t my Myers-Briggs result the same as my function result?
Because they aren’t the same thing. Your Myers-Briggs result is based on the letter values assigned to each question (for example, agreeing with question #42 most significantly increases your E, N, and P scores even though it would give you 2 points for «Se») and your two other results are based only on the raw function algorithms. They are scored differently and mean different things.
How accurate is the test?
That really depends on what «accurate» means to you. My test is only meant to take your answers, run the formulas, and give you a result based on those formulas; the test is 100% accurate solely in that regard. Whether or not your result is an accurate reflection of your «function type» or your Myers-Briggs type is for you to decide.
But I should stress an important detail: I’ve received a little over 10k responses to date, and I’ve been able to compare purported Myers-Briggs types on this test with the types received on «raw» Form Q. Unfortunately, crossover data is scarce: only a tiny percentage of the slightly-less-than-10k responders (you can take tests more than once) have taken both the raw Form Q test and the function test. There is a slight NP/SJ bias in the margins, so I would seriously consider J for you if you scored «strong/clear N» but «undifferentiated» on J/P, or S if you scored «undifferentiated» on S/N but «strong/clear P,» etc. But my big problem is that I can’t offset the results with numerical addends or subtrahends, because the gaps between these results are often relative, not absolute.
For now, I would just recommend interpreting your results with this in mind, but I may add a permalink for your results for inquiry purposes soon.
But your test is totally inaccurate! The questions suck, and I know I’m definitely not the type I got.
It’s really anyone’s guess what an «accurate» interpretation of the functions is, because such a thing doesn’t actually exist. I know, crazy. Maybe you think those definitions are absolutely wrong, maybe somebody else thinks those definitions are absolutely correct. There isn’t a consensus on what function theory is, and there frankly never will be.
But if you do think you have all the answers, I added an option for people to choose an accuracy score for the test—not of their results, since they haven’t seen them—but for the questions in «assessing» your functions. It’s a little dumb because no one actually knows which questions score for which function before they get their results, but adding post-result data to already-submitted results would be wonky. I’m sure there’s a way, and I’ll have to experiment with what works best.
You consent to me using your answers for data analysis
frequently asked questions (updated 2018/11/14)
What is the Grant-Brownsword function model?
In 1983, William Harold Grant, along with Magdala Thompson and Thomas E. Clarke, authored a book relating Jungian personality types to the Gospel by correlating Biblical themes to Jung’s functions. Titled From Image to Likeness: A Jungian Path in the Gospel Journey, the main purpose of this book was to encourage the reader to understand the importance and the meaning of «God’s image» and how to evoke it within you on a journey from image to likeness. But this work contained a tidbit that would come to shape typology today: a new psychological model.
Grant dubbed it the third major model, highlighting how it «views Jung’s functions and attitudes on the basis of a developmental typology.» This model was based on their observations from several hundred people involved in their retreats and workshops (frequently referenced as «R/W» throughout their preface) along with thousands of students from two universities; it specifically referred to four stages of development from the ages of six to fifty.
Grant understood his model was a deviation from conventional interpretations of Jung’s work and did not expect to «find support within the Jungian tradition». In his own words, «admittedly, it needed further testing.» Grant included his model in the book in order to encourage people to view their personalities not statically but dynamically.
Alan W. Brownsword would end up writing It Takes All Types! in 1987, utilizing Grant’s model «in accordance with» Myers-Briggs types. This is not actually the case; Brownsword seemed to share an incorrect belief with many personality theorists from his time about the nature of «Type,» and this caused him to commit categorical errors when interpreting Jungian theory and Myers’ work with the MBTI. When talking about the E/I orientations of the tertiary and inferior functions, Brownsword only says that «not all of students of Jung seem to agree with [the tertiary function sharing the same direction as the dominant function]» and dismisses the more accepted**** interpretation of Jung’s work claiming that the «tertiary function» would be introverted with a claim that «it just doesn’t seem to work that way.» Consider Brownsword’s model to be an awkward amalgamation of Jungian psychological types, Myers-Briggs theory, W.H. Grant’s third model, and his own interpretation of what’s really going on.
The function stack today originated with Grant and Brownsword, but has been popularized by figures like Linda Berens and Dario Nardi. There is a lot of history behind how this had come about, which you can read more about here: Full context: the cognitive functions.
**** the idea of having an «alternating stack» where the functions would be ordered IEIE or EIEI is fundamentally against how Jung described the function attitudes. Jung never made a stack template, but if he did, the directions would only ever work with two exclusive directions (i.e. IEEE, EEII, and IIIE would be acceptable, but not IEEI). Brownsword talked about how the «tertiary» function would be introverted according to Jungian analysts but he really meant that a function in that position would be introverted in their (correct) analysis of Jung’s work; «tertiary» functions are not a thing in Jung’s Psychological Types.
I don’t understand—how is all of this calculated?
I used to give the exact formulas for the calculations before, but I like the idea of the numbers themselves being publicly ambiguous. But I really don’t have a reason to be obscure about how the formulas are set up:
The Grant-Brownsword algorithm calculates a score for all sixteen possible types by adding up weighted totals for the dominant, auxiliary, and—very weakly—tertiary functions, then subtracting weighted inferior function totals in the final add-up. It would look something like this: a(dominant)+b(auxiliary)+c(tertiary)-d(inferior) = type_score
The axis-based algorithm will assume that there are no inferior functions in your stack, and that functions on opposite ends create axes that you would either prefer or not prefer, so in other words, your scores for Ne/Si are compared to Ni/Se, and the same thing goes for Se/Ni and Ni/Se. The algorithm then tries to figure out which one of those four «valued» functions you prefer should be dominant, and voila! You get your type.
Why isn’t my Myers-Briggs result the same as my function result?
Because they aren’t the same thing. Your Myers-Briggs result is based on the letter values assigned to each question (for example, agreeing with question #42 most significantly increases your E, N, and P scores even though it would give you 2 points for «Se») and your two other results are based only on the raw function algorithms. They are scored differently and mean different things.
How accurate is the test?
That really depends on what «accurate» means to you. My test is only meant to take your answers, run the formulas, and give you a result based on those formulas; this test would be 100% accurate solely with regards to that. Whether or not your result will be an accurate reflection of your «function type» or your Myers-Briggs type is up for you to decide.
But I should stress an important detail: I’ve received a little bit over 10k responses to date, and I’ve been able to compare purported Myers-Briggs types on this test with the types received on «raw» Form Q. Unfortunately, crossover data is scarce, and only about a tiny percentage of the slightly-less-than-10k responders (you can take tests more than once) have taken both the raw Form Q test and the function test. There is a slight NP/SJ bias in the margins, so I would seriously consider J for you if you scored «strong/clear N» and «undifferentiated» on J/P, or S if you scored «undifferentiated» and «strong/clear P,» etc. But my big problem is that I can’t offset the results with numerical addends or subtrahends because the gaps between these results are often relative, not absolute.
For now, I would just recommend interpreting your results with this in mind, but I may add a permalink for your results for inquiry purposes soon.
But your test is totally inaccurate! The questions suck, and I know I’m definitely not the type I got.
It’s really anyone’s guess what an «accurate» interpretation of the functions is, because such a thing doesn’t actually exist. I know, crazy. Maybe you think those definitions are absolutely wrong, maybe somebody else thinks those definitions are absolutely correct. There isn’t a consensus on what function theory is, and there frankly never will be.
But if you do think you have all the answers, I added an option for people to choose an accuracy score for the test—not of their results since they haven’t seen them—but for the questions in «assessing» your functions. It’s a little dumb because no one actually knows which question scores for which function before they get their results, but it would be a little wonky adding post-result data to already-submitted results. I’m sure there’s a way, and I’ll have to experiment with what works best.
You consent to me using your answers for data analysis
frequently asked questions (updated 2018/11/14)
What is the Grant-Brownsword function model?
In 1983, William Harold Grant, along with Magdala Thompson and Thomas E. Clarke, authored a book relating Jungian personality types to the Gospel by correlating Biblical themes to Jung’s functions. Titled From Image to Likeness: A Jungian Path in the Gospel Journey, the main purpose of this book was to encourage the reader to understand the importance and the meaning of «God’s image» and how to evoke it within you on a journey from image to likeness. But this work contained a tidbit that would come to shape typology today: a new psychological model.
Grant dubbed it the third major model, highlighting how it «views Jung’s functions and attitudes on the basis of a developmental typology.» This model was based on their observations from several hundred people involved in their retreats and workshops (frequently referenced as «R/W» throughout their preface) along with thousands of students from two universities; it specifically referred to four stages of development from the ages of six to fifty.
Grant understood his model was a deviation from conventional interpretations of Jung’s work and did not expect to «find support within the Jungian tradition». In his own words, «admittedly, it needed further testing.» Grant included his model in the book in order to encourage people to view their personalities not statically but dynamically.
Alan W. Brownsword would end up writing It Takes All Types! in 1987, utilizing Grant’s model «in accordance with» Myers-Briggs types. This is not actually the case; Brownsword seemed to share an incorrect belief with many personality theorists from his time about the nature of «Type,» and this caused him to commit categorical errors when interpreting Jungian theory and Myers’ work with the MBTI. When talking about the E/I orientations of the tertiary and inferior functions, Brownsword only says that «not all of students of Jung seem to agree with [the tertiary function sharing the same direction as the dominant function]» and dismisses the more accepted**** interpretation of Jung’s work claiming that the «tertiary function» would be introverted with a claim that «it just doesn’t seem to work that way.» Consider Brownsword’s model to be an awkward amalgamation of Jungian psychological types, Myers-Briggs theory, W.H. Grant’s third model, and his own interpretation of what’s really going on.
The function stack today originated with Grant and Brownsword, but has been popularized by figures like Linda Berens and Dario Nardi. There is a lot of history behind how this had come about, which you can read more about here: Full context: the cognitive functions.
**** the idea of having an «alternating stack» where the functions would be ordered IEIE or EIEI is fundamentally against how Jung described the function attitudes. Jung never made a stack template, but if he did, the directions would only ever work with two exclusive directions (i.e. IEEE, EEII, and IIIE would be acceptable, but not IEEI). Brownsword talked about how the «tertiary» function would be introverted according to Jungian analysts but he really meant that a function in that position would be introverted in their (correct) analysis of Jung’s work; «tertiary» functions are not a thing in Jung’s Psychological Types.
I don’t understand—how is all of this calculated?
I used to give the exact formulas for the calculations before, but I like the idea of the numbers themselves being publicly ambiguous. But I really don’t have a reason to be obscure about how the formulas are set up:
The Grant-Brownsword algorithm calculates a score for all sixteen possible types by adding up weighted totals for the dominant, auxiliary, and—very weakly—tertiary functions, then subtracting weighted inferior function totals in the final add-up. It would look something like this: a(dominant)+b(auxiliary)+c(tertiary)-d(inferior) = type_score
The axis-based algorithm will assume that there are no inferior functions in your stack, and that functions on opposite ends create axes that you would either prefer or not prefer, so in other words, your scores for Ne/Si are compared to Ni/Se, and the same thing goes for Se/Ni and Ni/Se. The algorithm then tries to figure out which one of those four «valued» functions you prefer should be dominant, and voila! You get your type.
Why isn’t my Myers-Briggs result the same as my function result?
Because they aren’t the same thing. Your Myers-Briggs result is based on the letter values assigned to each question (for example, agreeing with question #42 most significantly increases your E, N, and P scores even though it would give you 2 points for «Se») and your two other results are based only on the raw function algorithms. They are scored differently and mean different things.
How accurate is the test?
That really depends on what «accurate» means to you. My test is only meant to take your answers, run the formulas, and give you a result based on those formulas; this test would be 100% accurate solely with regards to that. Whether or not your result will be an accurate reflection of your «function type» or your Myers-Briggs type is up for you to decide.
But I should stress an important detail: I’ve received a little bit over 10k responses to date, and I’ve been able to compare purported Myers-Briggs types on this test with the types received on «raw» Form Q. Unfortunately, crossover data is scarce, and only about a tiny percentage of the slightly-less-than-10k responders (you can take tests more than once) have taken both the raw Form Q test and the function test. There is a slight NP/SJ bias in the margins, so I would seriously consider J for you if you scored «strong/clear N» and «undifferentiated» on J/P, or S if you scored «undifferentiated» and «strong/clear P,» etc. But my big problem is that I can’t offset the results with numerical addends or subtrahends because the gaps between these results are often relative, not absolute.
For now, I would just recommend interpreting your results with this in mind, but I may add a permalink for your results for inquiry purposes soon.
But your test is totally inaccurate! The questions suck, and I know I’m definitely not the type I got.
It’s really anyone’s guess what an «accurate» interpretation of the functions is, because such a thing doesn’t actually exist. I know, crazy. Maybe you think those definitions are absolutely wrong, maybe somebody else thinks those definitions are absolutely correct. There isn’t a consensus on what function theory is, and there frankly never will be.
But if you do think you have all the answers, I added an option for people to choose an accuracy score for the test—not of their results since they haven’t seen them—but for the questions in «assessing» your functions. It’s a little dumb because no one actually knows which question scores for which function before they get their results, but it would be a little wonky adding post-result data to already-submitted results. I’m sure there’s a way, and I’ll have to experiment with what works best.
You consent to me using your answers for data analysis
frequently asked questions (updated 2018/11/14)
What is the Grant-Brownsword function model?
In 1983, William Harold Grant, along with Magdala Thompson and Thomas E. Clarke, authored a book relating Jungian personality types to the Gospel by correlating Biblical themes to Jung’s functions. Titled From Image to Likeness: A Jungian Path in the Gospel Journey, the main purpose of this book was to encourage the reader to understand the importance and the meaning of «God’s image» and how to evoke it within you on a journey from image to likeness. But this work contained a tidbit that would come to shape typology today: a new psychological model.
Grant dubbed it the third major model, highlighting how it «views Jung’s functions and attitudes on the basis of a developmental typology.» This model was based on their observations from several hundred people involved in their retreats and workshops (frequently referenced as «R/W» throughout their preface) along with thousands of students from two universities; it specifically referred to four stages of development from the ages of six to fifty.
Grant understood his model was a deviation from conventional interpretations of Jung’s work and did not expect to «find support within the Jungian tradition». In his own words, «admittedly, it needed further testing.» Grant included his model in the book in order to encourage people to view their personalities not statically but dynamically.
Alan W. Brownsword would end up writing It Takes All Types! in 1987, utilizing Grant’s model «in accordance with» Myers-Briggs types. This is not actually the case; Brownsword seemed to share an incorrect belief with many personality theorists from his time about the nature of «Type,» and this caused him to commit categorical errors when interpreting Jungian theory and Myers’ work with the MBTI. When talking about the E/I orientations of the tertiary and inferior functions, Brownsword only says that «not all of students of Jung seem to agree with [the tertiary function sharing the same direction as the dominant function]» and dismisses the more accepted**** interpretation of Jung’s work claiming that the «tertiary function» would be introverted with a claim that «it just doesn’t seem to work that way.» Consider Brownsword’s model to be an awkward amalgamation of Jungian psychological types, Myers-Briggs theory, W.H. Grant’s third model, and his own interpretation of what’s really going on.
The function stack today originated with Grant and Brownsword, but has been popularized by figures like Linda Berens and Dario Nardi. There is a lot of history behind how this had come about, which you can read more about here: Full context: the cognitive functions.
**** the idea of having an «alternating stack» where the functions would be ordered IEIE or EIEI is fundamentally against how Jung described the function attitudes. Jung never made a stack template, but if he did, the directions would only ever work with two exclusive directions (i.e. IEEE, EEII, and IIIE would be acceptable, but not IEEI). Brownsword talked about how the «tertiary» function would be introverted according to Jungian analysts but he really meant that a function in that position would be introverted in their (correct) analysis of Jung’s work; «tertiary» functions are not a thing in Jung’s Psychological Types.
I don’t understand—how is all of this calculated?
I used to give the exact formulas for the calculations before, but I like the idea of the numbers themselves being publicly ambiguous. But I really don’t have a reason to be obscure about how the formulas are set up:
The Grant-Brownsword algorithm calculates a score for all sixteen possible types by adding up weighted totals for the dominant, auxiliary, and—very weakly—tertiary functions, then subtracting weighted inferior function totals in the final add-up. It would look something like this: a(dominant)+b(auxiliary)+c(tertiary)-d(inferior) = type_score
The axis-based algorithm will assume that there are no inferior functions in your stack, and that functions on opposite ends create axes that you would either prefer or not prefer, so in other words, your scores for Ne/Si are compared to Ni/Se, and the same thing goes for Se/Ni and Ni/Se. The algorithm then tries to figure out which one of those four «valued» functions you prefer should be dominant, and voila! You get your type.
Why isn’t my Myers-Briggs result the same as my function result?
Because they aren’t the same thing. Your Myers-Briggs result is based on the letter values assigned to each question (for example, agreeing with question #42 most significantly increases your E, N, and P scores even though it would give you 2 points for «Se») and your two other results are based only on the raw function algorithms. They are scored differently and mean different things.
How accurate is the test?
That really depends on what «accurate» means to you. My test is only meant to take your answers, run the formulas, and give you a result based on those formulas; this test would be 100% accurate solely with regards to that. Whether or not your result will be an accurate reflection of your «function type» or your Myers-Briggs type is up for you to decide.
But I should stress an important detail: I’ve received a little bit over 10k responses to date, and I’ve been able to compare purported Myers-Briggs types on this test with the types received on «raw» Form Q. Unfortunately, crossover data is scarce, and only about a tiny percentage of the slightly-less-than-10k responders (you can take tests more than once) have taken both the raw Form Q test and the function test. There is a slight NP/SJ bias in the margins, so I would seriously consider J for you if you scored «strong/clear N» and «undifferentiated» on J/P, or S if you scored «undifferentiated» and «strong/clear P,» etc. But my big problem is that I can’t offset the results with numerical addends or subtrahends because the gaps between these results are often relative, not absolute.
For now, I would just recommend interpreting your results with this in mind, but I may add a permalink for your results for inquiry purposes soon.
But your test is totally inaccurate! The questions suck, and I know I’m definitely not the type I got.
It’s really anyone’s guess what an «accurate» interpretation of the functions is, because such a thing doesn’t actually exist. I know, crazy. Maybe you think those definitions are absolutely wrong, maybe somebody else thinks those definitions are absolutely correct. There isn’t a consensus on what function theory is, and there frankly never will be.
But if you do think you have all the answers, I added an option for people to choose an accuracy score for the test—not of their results since they haven’t seen them—but for the questions in «assessing» your functions. It’s a little dumb because no one actually knows which question scores for which function before they get their results, but it would be a little wonky adding post-result data to already-submitted results. I’m sure there’s a way, and I’ll have to experiment with what works best.
You consent to me using your answers for data analysis
frequently asked questions (updated 2018/11/14)
What is the Grant-Brownsword function model?
In 1983, William Harold Grant, along with Magdala Thompson and Thomas E. Clarke, authored a book relating Jungian personality types to the Gospel by correlating Biblical themes to Jung’s functions. Titled From Image to Likeness: A Jungian Path in the Gospel Journey, the main purpose of this book was to encourage the reader to understand the importance and the meaning of «God’s image» and how to evoke it within you on a journey from image to likeness. But this work contained a tidbit that would come to shape typology today: a new psychological model.
Grant dubbed it the third major model, highlighting how it «views Jung’s functions and attitudes on the basis of a developmental typology.» This model was based on their observations from several hundred people involved in their retreats and workshops (frequently referenced as «R/W» throughout their preface) along with thousands of students from two universities; it specifically referred to four stages of development from the ages of six to fifty.
Grant understood his model was a deviation from conventional interpretations of Jung’s work and did not expect to «find support within the Jungian tradition». In his own words, «admittedly, it needed further testing.» Grant included his model in the book in order to encourage people to view their personalities not statically but dynamically.
Alan W. Brownsword would end up writing It Takes All Types! in 1987, utilizing Grant’s model «in accordance with» Myers-Briggs types. This is not actually the case; Brownsword seemed to share an incorrect belief with many personality theorists from his time about the nature of «Type,» and this caused him to commit categorical errors when interpreting Jungian theory and Myers’ work with the MBTI. When talking about the E/I orientations of the tertiary and inferior functions, Brownsword only says that «not all of students of Jung seem to agree with [the tertiary function sharing the same direction as the dominant function]» and dismisses the more accepted**** interpretation of Jung’s work claiming that the «tertiary function» would be introverted with a claim that «it just doesn’t seem to work that way.» Consider Brownsword’s model to be an awkward amalgamation of Jungian psychological types, Myers-Briggs theory, W.H. Grant’s third model, and his own interpretation of what’s really going on.
The function stack today originated with Grant and Brownsword, but has been popularized by figures like Linda Berens and Dario Nardi. There is a lot of history behind how this had come about, which you can read more about here: Full context: the cognitive functions.
**** the idea of having an «alternating stack» where the functions would be ordered IEIE or EIEI is fundamentally against how Jung described the function attitudes. Jung never made a stack template, but if he did, the directions would only ever work with two exclusive directions (i.e. IEEE, EEII, and IIIE would be acceptable, but not IEEI). Brownsword talked about how the «tertiary» function would be introverted according to Jungian analysts but he really meant that a function in that position would be introverted in their (correct) analysis of Jung’s work; «tertiary» functions are not a thing in Jung’s Psychological Types.
I don’t understand—how is all of this calculated?
I used to give the exact formulas for the calculations before, but I like the idea of the numbers themselves being publicly ambiguous. But I really don’t have a reason to be obscure about how the formulas are set up:
The Grant-Brownsword algorithm calculates a score for all sixteen possible types by adding up weighted totals for the dominant, auxiliary, and—very weakly—tertiary functions, then subtracting weighted inferior function totals in the final add-up. It would look something like this: a(dominant)+b(auxiliary)+c(tertiary)-d(inferior) = type_score
The axis-based algorithm will assume that there are no inferior functions in your stack, and that functions on opposite ends create axes that you would either prefer or not prefer, so in other words, your scores for Ne/Si are compared to Ni/Se, and the same thing goes for Se/Ni and Ni/Se. The algorithm then tries to figure out which one of those four «valued» functions you prefer should be dominant, and voila! You get your type.
Why isn’t my Myers-Briggs result the same as my function result?
Because they aren’t the same thing. Your Myers-Briggs result is based on the letter values assigned to each question (for example, agreeing with question #42 most significantly increases your E, N, and P scores even though it would give you 2 points for «Se») and your two other results are based only on the raw function algorithms. They are scored differently and mean different things.
How accurate is the test?
That really depends on what «accurate» means to you. My test is only meant to take your answers, run the formulas, and give you a result based on those formulas; this test would be 100% accurate solely with regards to that. Whether or not your result will be an accurate reflection of your «function type» or your Myers-Briggs type is up for you to decide.
But I should stress an important detail: I’ve received a little bit over 10k responses to date, and I’ve been able to compare purported Myers-Briggs types on this test with the types received on «raw» Form Q. Unfortunately, crossover data is scarce, and only about a tiny percentage of the slightly-less-than-10k responders (you can take tests more than once) have taken both the raw Form Q test and the function test. There is a slight NP/SJ bias in the margins, so I would seriously consider J for you if you scored «strong/clear N» and «undifferentiated» on J/P, or S if you scored «undifferentiated» and «strong/clear P,» etc. But my big problem is that I can’t offset the results with numerical addends or subtrahends because the gaps between these results are often relative, not absolute.
For now, I would just recommend interpreting your results with this in mind, but I may add a permalink for your results for inquiry purposes soon.
But your test is totally inaccurate! The questions suck, and I know I’m definitely not the type I got.
It’s really anyone’s guess what an «accurate» interpretation of the functions is, because such a thing doesn’t actually exist. I know, crazy. Maybe you think those definitions are absolutely wrong, maybe somebody else thinks those definitions are absolutely correct. There isn’t a consensus on what function theory is, and there frankly never will be.
But if you do think you have all the answers, I added an option for people to choose an accuracy score for the test—not for their results, since they haven’t seen them yet, but for the questions «assessing» your functions. It’s a little dumb because no one actually knows which question scores for which function before they get their results, but it would be a little wonky adding post-result data to already-submitted results. I’m sure there’s a way, and I’ll have to experiment with what works best.
interpreting your function test results
Disclaimer: This page serves to help you understand the meaning behind your responses. If you truly understand typology and its merits, I think you’ll find the information provided here—not the results on the function test—to be revealing of your test-taking habits.
Before we start, however, I must make clear to you that not all of your questions will be answered. This page is an experiment in computer-generated meta-analysis—it serves to automatically interpret your data based on patterns in testing that I have noticed personally. It remains subjective.
This section is also a work-in-progress! Only the first part has been finished so far; neither of the two function models currently used (the Grant/Brownsword model and the Myers model) is discussed yet. In the meantime… enjoy!
As things are set up right now, you won’t be able to return to this page if your browser session ends or expires. Functionality will eventually be added that will give you a unique identifier to return to this page.
I think what I was most surprised by when I launched this test back in April 2018 was how easily people accepted it into the personality test world. Many people were confused by what their results meant, yes, but the format of the test was never fundamentally called into question—at least, not in any way that impacted how widely the test was shared.
I bring this up because this test has more layers than it would initially seem. I think a typology veteran well aware of what the cognitive functions are would easily recognize the basic format of the test: a 96-question test that asks 12 questions for each of the eight functions. The mystery, it would seem, is at the very end—when everything finally gets calculated and the test gives you a few types.
The information for you to understand what this test really means is all out there. It isn’t exactly accessible, though. This website has many disjointed, messy articles about the meaning of type that I’d written as a teenager, and making sense of the underlying perspective behind all that can be… an arduous task. The two-year-old «frequently asked questions» section underneath your results is dense and often superfluous, so it doesn’t surprise me now that people weren’t really able to piece together the mystery of the Sakinorva cognitive function test.
Some people have made serious efforts, though. I’ve seen social media posts and blog posts (in many different languages, even) trying to explain in detail what the results exactly mean, and some people are definitely on the right track. I’m sure some people have even figured it out—but they certainly haven’t set the tone of the conversation. There’s an entire culture created around the cognitive functions, and challenging it isn’t exactly easy.
Before continuing, I strongly suggest reading Full context: the cognitive functions to get an idea of how this function test and the following analysis will look at your results. It may help you understand what the test really is beyond «a cognitive function test» and will familiarize you with some of the language used here to describe your results. Bear in mind that this isn’t necessary, but it will aid you in understanding the perspective from which all of this is passed onto you.
I often read lines about how «the cognitive functions are the real MBTI» or how people «have been reading up and learning about the cognitive functions for years,» and such people just about always reveal that they have been exposed only to sources that give one side of the story; what they often don’t realize is that in order to really understand the cognitive functions, they must be willing to challenge the dogma that proliferates misinformation about what it means to be you.
There are two things you should try to keep in mind:
1) The actual cognitive function test doesn’t take your responses at face value; it «thinks around» your answers.
2) This analysis page works the same way.
This analysis page serves to both demystify the results provided to you and to look at them beyond what they usually mean at face value. Just like the test itself, extrapolation and guesswork will come into play. This analysis will look not just at what you give it, but also how you give it. It’s not about you, as the answers would tell me, but about you, as the test-taker filling out a form about yourself.
We should go over the very basics first. You received several different types of results, and we should try to understand what they all mean one-by-one. We will begin with the two different function types, starting with your Grant/Brownsword result:
| (grant) function type | ENFP |
If you aren’t familiar with it, the Grant/Brownsword function model was a model of Jungian type dynamics created by William Harold Grant, with Magdala Thompson and Thomas E. Clarke, in their book From Image to Likeness: A Jungian Path in the Gospel Journey. The most significant thing about Grant’s function model was that it was the first to break away from Jungian convention by flipping the third function (for example, an introvert’s four functions had been ordered IEEE, but Grant turned that into IEIE). This is the model most commonly used today when people talk about «the cognitive functions.»
His model was developmental. He specifically uses the term «type development» to describe how you cognitively develop as you age. Grant divides up type development into four different periods. Your primary function (Ne) develops from 6 to 12 years of age. From 12 to 20 years of age, you develop your auxiliary function, Fi. From 20 to 35 years old, you develop Te. And finally—you come to develop your shadow side (Si) from 35 to 50 years old.
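The stack and timeline described above can be sketched in code. This is only an illustration of the convention as this page describes it (dominant and auxiliary in opposite attitudes, tertiary flipped to share the dominant’s attitude, inferior opposite the dominant), with the four age brackets quoted from the paragraph; none of this is the test’s actual source code.

```python
# Grant-style stack derivation from a four-letter type code, plus the four
# development periods (ages 6-12, 12-20, 20-35, 35-50) described above.

OPPOSITE = {"S": "N", "N": "S", "T": "F", "F": "T"}
PERIODS = ((6, 12), (12, 20), (20, 35), (35, 50))

def grant_stack(mbti: str) -> list[str]:
    """Return the Grant function stack for a four-letter type code."""
    attitude, perceiving, judging, lifestyle = mbti.upper()
    # The J/P letter points at the extraverted function:
    if lifestyle == "J":
        ext, intro = judging, perceiving
    else:
        ext, intro = perceiving, judging
    if attitude == "E":
        dom, aux = ext + "e", intro + "i"
    else:
        dom, aux = intro + "i", ext + "e"
    tert = OPPOSITE[aux[0]] + dom[1]   # Grant's flip: same attitude as dominant
    inf = OPPOSITE[dom[0]] + aux[1]    # opposite function, opposite attitude
    return [dom, aux, tert, inf]

def development_timeline(mbti: str) -> list[tuple[str, tuple[int, int]]]:
    """Pair each function in the stack with its Grant development period."""
    return list(zip(grant_stack(mbti), PERIODS))
```

For ENFP, `development_timeline("ENFP")` pairs Ne with ages 6 to 12, Fi with 12 to 20, Te with 20 to 35, and Si with 35 to 50, matching the paragraph above.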
Grant’s model was a hypothesis! He acknowledged that it was a departure from Jungian convention and didn’t anticipate it being widely accepted. It was based on his own observations in retreats and workshops at the time. Bear in mind, also, that Grant was not only a Jungian scholar but also a Christian man. His work was primarily concerned with the spirituality of type and sought to show people that their personalities were not fixed, but dynamic and pointed toward self-development.
What about type? Luckily, Grant wanted to show us a picture of what his hypothetical types looked like—he included profiles.
I’ll now pretend to be Grant. As a young child, you were absorbed in the world of imagination, stimulated primarily by the social world rather than in solitude. If you were an only child, you might have had an imaginary friend. With other children, you were often the one who stimulated them with new and exciting activities, being easily bored by routine, whether in play, work, or study. You may have been urged to come back down to the real world, and you might have been scolded for your disorderliness. Even as you did today’s tasks, your mind would be on what tomorrow might bring.
In adolescence, you began to cultivate your feelings. As you developed your sense of compassion, your interests turned toward being of service to others, and you may have joined efforts to help the disadvantaged and underprivileged. You might have found yourself more committed to your traditions. Career-wise, you might have considered looking toward service-oriented opportunities.
Around twenty, you began wondering whether you had previously begun to shape yourself on the basis of being of service to others rather than of strongly held convictions of your own; the previous stage had been a turn inward, and in contrast, you now began to develop a social attitude that brought you security in your own convictions and a sense of assertiveness. Though this behavior may have shown itself awkwardly, you believed the answer was to show more assertiveness rather than return to submissiveness.
Your life up to now had been so indulgent in the possible that your relationship with the sensory world had gone underdeveloped; you failed to notice the details of the world around you. Now, however, you began to take an interest in this world, picking up sensing-related hobbies such as sewing, crafting, or learning to play a musical instrument. Punctuality and neatness became important to you as you also began to reconcile with solitude, a departure from your tendency toward excitement and activity.
The book ends with a concluding observation that dramatizes the turn toward the shadow side at age thirty-five, evoking images of crisis. What we call the inferior function was not meant to be as fluid as the turning points between the first three functions in type development. I take note of this because I took it into account for the algorithm—your inferior function is added as a negative value at the end.
The test assumes you’re somewhere between the second and third stages of type development, even though many disregard the idea altogether today. I didn’t want to single out results based on age and figured a more universal model would cover most people anyway. If you’d like, you can maybe try to make out your own standing relative to type development given your function results. I won’t do anything automatic—do your own soul searching! Here they are again for your convenience:
| Ne (extraverted intuition) | 0 |
| Ni (introverted intuition) | 0 |
| Se (extraverted sensing) | 0 |
| Si (introverted sensing) | 0 |
| Te (extraverted thinking) | 0 |
| Ti (introverted thinking) | 0 |
| Fe (extraverted feeling) | 0 |
| Fi (introverted feeling) | 0 |
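The site’s FAQ describes the Grant/Brownsword scoring rule as a(dominant) + b(auxiliary) + c(tertiary) − d(inferior), with the tertiary weighted very weakly and the inferior subtracted. A minimal sketch of that shape follows; the raw function totals, the three sample stacks shown, and the weights a, b, c, d are all placeholders (the test’s real weights are deliberately unpublished).

```python
# Illustrative Grant/Brownsword type scoring. Every number here is made up;
# only the shape of the formula comes from the FAQ.

FUNCTION_SCORES = {"Ne": 38, "Ni": 22, "Se": 14, "Si": 18,
                   "Te": 25, "Ti": 20, "Fe": 19, "Fi": 33}  # hypothetical totals

GRANT_STACKS = {  # three sample types; the real test scores all sixteen
    "ENFP": ("Ne", "Fi", "Te", "Si"),
    "INFP": ("Fi", "Ne", "Si", "Te"),
    "ISTJ": ("Si", "Te", "Fi", "Ne"),
}

def type_score(stack, scores, a=2.0, b=1.5, c=0.25, d=1.0):
    """a*(dominant) + b*(auxiliary) + c*(tertiary) - d*(inferior)."""
    dom, aux, tert, inf = stack
    return a * scores[dom] + b * scores[aux] + c * scores[tert] - d * scores[inf]

best = max(GRANT_STACKS, key=lambda t: type_score(GRANT_STACKS[t], FUNCTION_SCORES))
```

With these placeholder numbers the Ne-and-Fi-heavy profile lands on ENFP, since its dominant and auxiliary slots line up with the two highest raw totals while its inferior (Si) penalty stays small.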
Remember that big chart with all those values next to the sixteen types? Those are percentage agreement values for your Grant type—your results were compared to each type outline and then listed in order, from your worst match to your best match. Here is that chart again:
| ENFP | 0 |
| ENTP | 0 |
| ESFP | 0 |
| ESTP | 0 |
| ENFJ | 0 |
| ENTJ | 0 |
| ESFJ | 0 |
| ESTJ | 0 |
| INFP | 0 |
| INTP | 0 |
| ISFP | 0 |
| ISTP | 0 |
| INFJ | 0 |
| INTJ | 0 |
| ISFJ | 0 |
| ISTJ | 0 |
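The text doesn’t specify how the agreement values in this chart are actually derived from the comparison against each type outline, so the following is only a guess at the mechanics: min-max normalizing hypothetical per-type scores into a 0–100 range and ranking them.

```python
# Hypothetical per-type scores; in the real test these would come from
# comparing your answers against each of the sixteen type outlines.
raw = {"ENFP": 113.75, "INFP": 102.5, "ENTP": 96.0, "ISTJ": 43.75}

lo, hi = min(raw.values()), max(raw.values())
percent = {t: round(100 * (s - lo) / (hi - lo)) for t, s in raw.items()}
ranking = sorted(percent, key=percent.get, reverse=True)  # best match first
```

Under this assumed scheme the best match always normalizes to 100 and the worst to 0, which is why the gaps between results read as relative rather than absolute.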
Here are also the rest of the paraphrased Grant type descriptions, ordered from highest percentage to lowest. This is a long section, so you’ll get a button here to close descriptions, as well as a «table of contents» to jump to different descriptions.
(collapse/expand all descriptions)
As a child, you enjoyed experiencing the physical world around you, watching, listening, touching as you developed your relationship with the sensory. You discovered personal, internal connections to nature, collecting and remembering facts and information about the world. You were attentive and dependable, using your understanding of the sensory to pursue sensible interests and picking up hobbies such as physical sports, playing instruments, or working with your hands.
Around the age of twelve, you began to look outward toward wider relationships, finding yourself becoming more outgoing and invested in group activities. You became attuned to the needs of others, desiring to please and help. Your guilt and empathy in your honest endeavors to serve others may have attracted others to confide and trust in you.
In due time, you began to look inward to protect your own interests, feeling free to deny others’ demands. Those who had taken your generosity for granted may have been dismayed, and you sometimes wondered if you had become too hardened. Your attitude may have come across as harsh, but this turn inward allowed you to explore your true self, discovering new lines drawn between reason and faith. You may even have kept smaller company, preferring to extend to a trusted few rather than many.
As your shadow side emerged, you found yourself re-engaging with social life, a growing preference for imagination taking the lead away from a grounded present and exposing you to a world of speculation and daydreaming. Your previous worldly worries began to slip away as you found expansion in this new creativity.
In your early childhood, you were drawn to develop your inner sense of creativity. You might have had an imaginary friend with whom you spent time in dreamy silence. Few close friends were allowed to share with you your world of imagination. You were a big daydreamer, and teachers constantly reminded you to pay attention. You don’t remember the details of this time period very well, as they had not piqued your interest then—but you do remember the atmosphere and ambience of the feelings you’d felt back then.
Around the age of twelve, you began to develop your thinking and greatly valued logic, analysis and truth. Mediating justice and fairness, you found yourself able to emotionally detach yourself from tense situations and bear roles of responsibility. Your orderly objectivity in approaching problems came off as surprising, especially among your peers who assumed you simply had difficulty in expressing your feelings.
With time, you became more attached to your feelings: more compassionate, subjective, and perhaps even more easily offended. This was a turn back inward in your life. Decisions previously made solely through reason were now driven also by sensitivity to others’ feelings, and your personal values helped guide you through them. Your feelings may not have been expressed outwardly, but their depth was felt in your decision-making.
You now began to experience your last function—sensing—as you started to notice details around you that you’d left unacknowledged. For the first time, you’d begun to take pleasure in exercising your senses, whether it be through playing an instrument, learning a craft, or collecting objects. You’d engaged in these activities with a newfound precision that contradicted the disorder you’d been used to, which you now grew impatient with.
In your early childhood, you were drawn to develop your inner sense of creativity. You might have had an imaginary friend with whom you spent time in dreamy silence. Few close friends were allowed to share with you your world of imagination. You were a big daydreamer, and teachers constantly reminded you to pay attention. You don’t remember the details of this time period very well, as they had not piqued your interest then—but you do remember the atmosphere and ambience of the feelings you’d felt back then.
Around the age of twelve, you became aware of a desire to express yourself through a mode of feeling, even though you maintained your predominantly intuitive disposition. You became more aware of the needs of others, looking to help the poor, the suffering, and the underdogs. You may have joined groups committed to being of service to others, and you found it difficult to find time for yourself.
At twenty, you experienced a new desire to become more independent, searching for autonomy as you became critical of your previous submission to others. Because this attitude emerged internally, you found it difficult to express to others how you wished to shape yourself, and they may have been surprised or offended by the change. Despite perhaps feeling that this attitude had been developing poorly, you decided to hone it and allow yourself to eventually grow; rather than returning to submissiveness, you wanted to grow further into your assertiveness.
You now began to experience your last function—sensing—as you started to notice details around you that you’d left unacknowledged. For the first time, you’d begun to take pleasure in exercising your senses, whether it be through playing an instrument, learning a craft, or collecting objects. You’d engaged in these activities with a newfound precision that contradicted the disorder you’d been used to, which you now grew impatient with.
You began developing your thinking from an early age, organizing your internal world quietly and deliberately. You paid close attention to matters of logic and reason, searching for clarity and reasonability in the rules you had to follow—complying with what did not make sense to you was difficult. You rarely shared your thoughts with others, but you had a select few with whom you communicated, albeit deliberately and thoughtfully.
Around the age of twelve, you began to lose your shyness and found enjoyment in activities such as collecting and classifying things, and preferred hobbies that directly involved working with your hands, such as sewing, carpentry, or playing an instrument. Your ability to handle practical matters drew you to responsibilities involving managing efficiency, and you could show a great deal of focus doing them.
At twenty, you began to discover your readiness for creativity. You began looking into future possibilities, sometimes even in ways that were bizarre or unconventional. You became less detail-oriented and more forgetful, but you also found yourself more interested in the potential than the actual, probing inwardly and even daydreaming to find your imaginative side.
Around thirty-five years into your life, you began to yield to a newfound sensitivity for others, driven by personal values. This may have expressed itself awkwardly in its earlier stages, bringing embarrassment in social situations, but you gradually learned to express your compassion and accepted that not everything needed to be rational.
As a young child, you may have been obedient and considerate, choosing to please others rather than be a burden. You were likely to spend time by yourself, as your feelings were directed inwardly. You may have devalued your own interests, preferring to serve others and be praised. You felt obligated to keep harmony, taking responsibility whenever conflict arose.
Around the age of twelve, you began to lose your shyness and found enjoyment in activities such as collecting and classifying things, and preferred hobbies that directly involved working with your hands, such as sewing, carpentry, or playing an instrument. Your ability to handle practical matters drew you to responsibilities involving managing efficiency, and you could show a great deal of focus doing them.
At twenty, you began to discover your readiness for creativity. You began looking into future possibilities, sometimes being unable to handle minute details, especially earlier on in development. You became less detail-oriented and more forgetful, your need for efficiency taking a back seat as you began thinking more about the future.
Then began an awkward period where you found yourself drawn to assertive behavior, despite having been used to acting nearly the opposite. Detached and not desiring to please, you found yourself sometimes hostile or aggressive, showing resentment or rebelliousness for having been submissive to domination by others. You were now, however, determined to stand your ground despite being unhappy with the vigor you’d displayed doing so. You became less vulnerable to criticism and showed your own.
You began developing your thinking from an early age, organizing your internal world quietly and deliberately. You paid close attention to matters of logic and reason, searching for clarity and reasonability in the rules you had to follow—complying with what did not make sense to you was difficult. You rarely shared your thoughts with others, but you had a select few with whom you communicated, albeit deliberately and thoughtfully.
Around the age of twelve, you began to develop your intuition, looking to expand your imaginative realm. This attitude germinated in a social manner, as you found yourself becoming more outgoing, sharing your ideas often in lively discussion. Though you still preferred being alone, you found real joy in interacting with others as you began to orient yourself toward more imaginative ways of doing things and planning out their future. Your focus drifted away from actuality and more toward the essences of things. You might also have found it difficult to keep things in order, but you were able to probe for them to your own satisfaction.
At twenty, you found excitement in discovering the sensory world, which had previously been of little interest to you. This attitude was directed toward the interior, as it had been in your early childhood, and you found yourself enjoying activities such as walking through nature, playing an instrument, or working with your hands. You became more aware of your image to others, and you could be conscious of what others thought of you.
Around thirty-five years into your life, you began to yield to a newfound sensitivity for others, driven by personal values. This may have expressed itself awkwardly in its earlier stages, bringing embarrassment in social situations, but you gradually learned to express your compassion and accepted that not everything needed to be rational.
As a young child, you may have been obedient and considerate, choosing to please others rather than be a burden. You were likely to spend time by yourself, as your feelings were directed inwardly. You may have devalued your own interests, preferring to serve others and be praised. You felt obligated to keep harmony, taking responsibility whenever conflict arose.
Around the age of twelve, you began to develop your intuition, looking to expand your imaginative realm. This attitude germinated in a social manner, as you found yourself becoming more outgoing, sharing your ideas often in lively discussion. Though you still preferred being alone to cultivate your feelings, you found real joy in interacting with others as you began to orient yourself toward more imaginative ways of doing things and planning out their future. Your focus drifted away from actuality and more toward the essences of things. You might also have found it difficult to keep things in order, but you were able to probe for them to your own satisfaction.
At twenty, you found excitement in discovering the sensory world, which had previously been of little interest to you. This attitude was directed toward the interior, as it had been in your early childhood, and you found yourself enjoying activities such as walking through nature, playing an instrument, or working with your hands. You became more aware of your image to others, and you could be conscious of what others thought of you.
Then began an awkward period where you found yourself drawn to assertive behavior, despite having been used to acting nearly the opposite. Detached and not desiring to please, you found yourself sometimes hostile or aggressive, showing resentment or rebelliousness for having been submissive to domination by others. You were now, however, determined to stand your ground despite being unhappy with the vigor you’d displayed doing so. You became less vulnerable to criticism and showed your own.
As a child, you were outgoing and sought to make reason out of a world of directives passed down to you, reluctant to follow orders unless you agreed with the logic behind them. Your decisions were guided by logical thinking detached from any need to please those around you, and fairness took priority. While you rarely did what you did not want to do, you maintained a strong sense of fairness that led you to do the right thing.
Going into adolescence, you found pleasure in physical activities such as sports, sewing, or playing a musical instrument, and you enjoyed collecting things. You were guided by a new internal orientation toward the sensory, treasuring facts, figures, and experienced knowledge. You may even have found yourself comfortable with solitude, and perhaps had few close friends.
This was a difficult phase to grapple with: you may have found yourself struggling to understand your growing tendency to leave behind particulars, spending more time engaging with ideas. You enjoyed your newly discovered creativity, noticing that you would be the one to come up with new, unique ideas, inspired by engagement and discussion with those around you.
Around thirty-five years into your life, you began to develop your feeling. Through struggle, you found yourself making decisions based on your personal feelings, sometimes coming off as moody or arbitrary to those around you. However, with time, you realized your sensitivity for others, and with awareness and compassion, you found your tune in the world of feeling, even opening yourself up to vulnerability and finding yourself occasionally offended.
You were friendly, outgoing, and loving in your childhood and wished to please those around you, especially authority figures such as your parents. You showed your vulnerability and were sensitive to the needs of others, often assuming responsibility for others. Anger may have often led to self-blame, and breaking the rules may have led to shame and guilt. You primarily wished to keep everyone happy, and when not bearing the weight of responsibility, you relished the joy of being alive.
Going into adolescence, you found pleasure in physical activities such as sports, sewing, or playing a musical instrument, and you enjoyed collecting things. You may even have found yourself comfortable with solitude, and perhaps had few close friends. You became focused on keenness, accuracy, and attention to detail in your work.
As you entered your twenties, you began to see the world through a more creative lens, recognizing possibilities in people. Unfortunately, you may have found yourself becoming more forgetful and distracted as you focused less on particulars—your worries, however, may have slipped away along with it as alternative solutions to problems made themselves clear to you. This was a side of your imagination that was stimulated primarily through engagement with people, a return toward the outward.
Around thirty-five years into your life, you found yourself in touch with a shadow side that saw it necessary for you to assert yourself, which you may have had before but without the same sense of urgency. You became able to refuse others’ demands more easily, and you turned toward the world of logic: rationality and reasonability became of more importance. Though you may have been viewed less benevolently as a result of this shift, you began to experience a new peace around the idea that you could choose to be generous and had greater control over life.
As a child, you were outgoing and sought to make reason out of a world of directives passed down to you, reluctant to follow orders unless you agreed with the logic behind them. Your decisions were guided by logical thinking detached from any need to please those around you, and fairness took priority. While you rarely did what you did not want to do, you maintained a strong sense of fairness that led you to do the right thing.
Going into adolescence, you turned to the development of intuition, looking inward as you expanded your sense of imagination. This phase was marked by less concern for external management and a greater interest in internal exploration, sharing plans and goals with others. Though you may have seen yourself as forgetful and impractical, those peers who were most grounded could have been surprised by your down-to-earthiness.
You may have surprised yourself with a turn outward toward the present, a departure from a future-oriented disposition rooted in the internal imagination. New interests began to arise—sports, handcrafts, musical instruments—and you shared them with others. You found yourself now valuing tidiness, punctuality, and accuracy, which had once been impossible for you to manage.
Around thirty-five years into your life, you began to develop your feeling. Through struggle, you found yourself making decisions based on your personal feelings, sometimes coming off as moody or arbitrary to those around you. However, with time, you realized your sensitivity for others, and with awareness and compassion, you found your tune in the world of feeling, even opening yourself up to vulnerability and finding yourself occasionally offended.
You were friendly, outgoing, and loving in your childhood and wished to please those around you, especially authority figures such as your parents. You showed your vulnerability and were sensitive to the needs of others, often assuming responsibility for others. Anger may have often led to self-blame, and breaking the rules may have led to shame and guilt. You primarily wished to keep everyone happy, and when not bearing the weight of responsibility, you relished the joy of being alive.
Going into adolescence, you turned to the development of intuition, looking inward as you expanded your sense of imagination. This phase was marked by less concern for fostering harmony and a greater interest in internal exploration, sharing plans and goals with others. Though you may have seen yourself as forgetful and impractical, those peers who were most grounded could have been surprised by your down-to-earthiness. You may have focused less on facts, however, and more on impressions of the full picture and the ambience associated with it.
You now had a desire to shift away from the future and toward the present. New interests began to arise—sports, handcrafts, musical instruments—and you shared them with others. You found yourself now valuing tidiness, punctuality, and accuracy, which had once been impossible for you to manage. In the long run, however, you maintained your preference for daydreaming.
Around thirty-five years into your life, you found yourself in touch with a shadow side that saw it necessary for you to assert yourself, which you may have had before but without the same sense of urgency. You became able to refuse others’ demands more easily, and you turned toward the world of logic: rationality and reasonability became of more importance. Though you may have been viewed less benevolently as a result of this shift, you began to experience a new peace around the idea that you could choose to be generous and had greater control over life.
As a child, you were most engaged in developing sensing. You wanted to collect knowledge on everything and share it with others, needing lots of stimulation and easily becoming bored. You wanted to know more about the people and things around you, so you may have been into collecting and classifying things related to engaging activities, such as sports or gardening.
Around the age of twelve, you started to look inward, becoming more attuned to logic in your decision making. You developed a frank attitude, and people may have been put off by your directness and honesty. You may have also found yourself subtly taking on the role of a manager, driven by your standards of reason; people may not have always seen where you were coming from.
Things then began to shift around the age of twenty. It may have been difficult for you to enter this stage of development, as the style you had become used to in your adolescence was directly opposed to a more sensitive side of you that you were now developing. Compassion, humility, and vulnerability finally emerged, and you found yourself dealing with others more tactfully. You also began to express your own feelings outward more easily—you became in touch with embarrassment and could be moved to tears.
Having spent most of your life living in the immediate sensory world, you begin to look to your shadow side in your final stage of type development. You think more about the future and let go of your detail-oriented nature—you think less factually and more speculatively. You see yourself become more creative and detach yourself from tedious worries. Daydreaming comes more naturally, and you find that you are at your most inspired when you are alone.
As a child, you were most engaged in developing sensing. You wanted to collect knowledge on everything and share it with others, needing lots of stimulation and easily becoming bored. You wanted to know more about the people and things around you, so you may have been into collecting and classifying things related to engaging activities, such as sports or gardening.
Around the age of twelve, you began looking inward and developed your feelings and compassion for others. You were not, however, outwardly expressive of these feelings. You showed great care and sensitivity to the pain and suffering of others, and found it difficult to put your foot down; you were tolerant and appreciative. You would often sacrifice your true wishes in order to please others, even if they were not aware of you doing so. Conversely, people may have brushed your needs aside, not showing the same regard for your feelings as you did for theirs.
Eventually, you realized you had to look out for yourself. You developed a sense of aggressiveness as you began to look out for your own needs and showed assertion when people tried pushing you around. Though you may have felt guilt when people accustomed to your gentleness saw this new side of your behavior, you began to enjoy being in charge of yourself and free to be who you wished to be.
Having spent most of your life living in the immediate sensory world, you begin to look to your shadow side in your final stage of type development. You think more about the future and let go of your detail-oriented nature—you think less factually and more speculatively. You see yourself become more creative and detach yourself from tedious worries. Daydreaming comes more naturally, and you find that you are at your most inspired when you are alone.
As a young child, you were absorbed in the world of imagination, stimulated primarily by the social world rather than by solitude. If you were an only child, you might have had an imaginary friend. With other children, you were often the one who stimulated them with new and exciting activities, being easily bored by routine, whether in play, work, or study. You may have been urged to come back down to the real world, and you might have been scolded for your disorderliness. Even as you did today’s tasks, your mind would be on what tomorrow might bring.
Around the age of twelve, you began looking inward, becoming more reflective and turning to the world of logic. You often had difficulty describing the schemes you’d come up with to understand the world around you to others, and you spoke frankly and honestly to others—even if they may have been expecting more sensitivity on your part. You developed a strong sense of fairness and used it to guide your decision making.
You now began to find yourself becoming uncharacteristically subjective, paying closer attention to the sentiments of others and becoming more involved with visceral feelings of your own, acting more on emotional whims than on the consistent, patterned behavior you had shown before. Reactions you would once have considered too sentimental now took the forefront, and people may have begun to notice more warmth, tenderness, and compassion in communicating with you.
Your life up to now had been so indulgent in the possible that your relationship with the sensory world had gone underdeveloped; you failed to notice the details of the world around you. You now, however, began to take an interest in this world, picking up sensing-related hobbies such as sewing, crafting, or learning to play a musical instrument. Punctuality and neatness became important to you as you also began to reconcile with solitude, a departure from your tendency toward excitement and activity.
As a young child, you were absorbed in the world of imagination, stimulated primarily by the social world rather than by solitude. If you were an only child, you might have had an imaginary friend. With other children, you were often the one who stimulated them with new and exciting activities, being easily bored by routine, whether in play, work, or study. You may have been urged to come back down to the real world, and you might have been scolded for your disorderliness. Even as you did today’s tasks, your mind would be on what tomorrow might bring.
In adolescence, you began to cultivate your feelings. As you developed your sense of compassion, your interests turned toward being of service to others and may have joined efforts to help the disadvantaged and underprivileged. You might have found yourself more committed to your traditions. Career-wise, you might have considered looking toward service-oriented opportunities.
Around twenty, you began wondering whether you had previously shaped yourself on the basis of strongly held convictions rather than on being of service to others; that questioning had been a turn inward. In contrast, you now began to develop a social attitude that brought you security in your own convictions and a sense of assertiveness. Though this behavior may have shown itself awkwardly, you believed the answer was to show more assertiveness rather than return to submissiveness.
Your life up to now had been so indulgent in the possible that your relationship with the sensory world had gone underdeveloped; you failed to notice the details of the world around you. You now, however, began to take an interest in this world, picking up sensing-related hobbies such as sewing, crafting, or learning to play a musical instrument. Punctuality and neatness became important to you as you also began to reconcile with solitude, a departure from your tendency toward excitement and activity.
Did you relate to Grant’s descriptions at all? How do you resonate with the development it highlights? While it’s your own judgment to make, you should note that I never directly test for Grant’s archetype criteria, and that the overlap between the ninety-six questions and Grant’s work is merely coincidental. Remember, the test uses his model—not necessarily his ideas.
Just as William Harold Grant claims, there is a lot of Jungian influence in his descriptions. It doesn’t translate exactly, and you can tease out more MBTI influence in the particulars of his descriptions, but Grant, like Jung, does not really differentiate eight different functions as much as he does four functions with two different attitudes (introverted and extraverted).
Though he himself made more of a connection to Jung’s work, Grant’s model ended up becoming popular in the MBTI world, possibly with the help of Alan Brownsword, the author of It Takes All Types!, who used Grant’s XYXY model and may have played a hand in popularizing it in the Myers-Briggs world.
In the olden days, when Isabel Myers was more focused on redesigning Jung’s work, she had been invested in type dynamics, which sought to link her MBTI to the Jungian psychological types. It didn’t really work, though, and she left this idea unfinished (other psychologists, however, became deeply committed to trying to link both MBTI and Jung, and it’s probably why type dynamics survives today in what we call the cognitive functions on the Internet). There weren’t any real type «descriptions,» and Myers had barely worked out its logistics. And because this didn’t stop anyone from canonizing type dynamics, I decided to pay it homage by including what we’ll call the Myers function type. You can see what you got for it here:
| myers function type | xxxx |
What’s it mean? Isabel Myers paid close attention to two particular ideas: E/I and J/P. N and S are perceiving functions, while T and F are judging functions—similar to Jung. However, what she did differently was call attention to what gets extraverted. If you extravert perception, you are P; if you extravert judgment, you are J.
It shouldn’t surprise you, then, that your Myers function J/P is decided by adding together your two extraverted judging functions and your two extraverted perceiving functions, and then comparing the totals.
E/I is then calculated by adding up your introverted functions and your extraverted functions and comparing them; the same goes with F/T and N/S. That’s your Myers function type—mystery solved!
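Putting the last two paragraphs together, the Myers function type calculation can be sketched in a few lines. This is only an illustration of the comparisons described above; the function names are standard, but the score values (and the assumption that ties break a particular way) are invented.

```python
# Hypothetical sketch of the Myers function type calculation described above.
# The eight function scores below are invented example values, not real output.
scores = {"Ne": 14, "Ni": 9, "Se": 6, "Si": 11,
          "Te": 8, "Ti": 13, "Fe": 10, "Fi": 12}

def myers_function_type(s):
    # J/P: extraverted judging (Te + Fe) vs. extraverted perceiving (Ne + Se).
    jp = "J" if s["Te"] + s["Fe"] > s["Ne"] + s["Se"] else "P"
    # E/I: total of all extraverted functions vs. all introverted functions.
    extraverted = s["Ne"] + s["Se"] + s["Te"] + s["Fe"]
    introverted = s["Ni"] + s["Si"] + s["Ti"] + s["Fi"]
    ei = "E" if extraverted > introverted else "I"
    # N/S and T/F: both attitudes of one function vs. both of its opposite.
    ns = "N" if s["Ne"] + s["Ni"] > s["Se"] + s["Si"] else "S"
    tf = "T" if s["Te"] + s["Ti"] > s["Fe"] + s["Fi"] else "F"
    return ei + ns + tf + jp

print(myers_function_type(scores))  # the example scores work out to "INFP"
```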
We don’t really have descriptions for Myers functions, unfortunately. There are loose threads here and there, but as far as I know, nothing concrete really exists out there. So what about the last thing? What actually is a Myers-Briggs type?
You’re better off, honestly, not trying to make sense of that on the Internet. With type dynamics muddying the waters, it seems like the big feud is over where MBTI stands between «the letters» and «the functions.» If you haven’t yet read Full context: the cognitive functions, this would be the time to do it.
To put it briefly: this test asks you questions related to what people on the Internet call «the cognitive functions.» But because this concept is a meaningless, amorphous blob of various subconcepts derived from antiquated sources trying to tie Jung and MBTI together, the most sensible way of assessing your type in a manner that is logically consistent, streamlined, and largely universal would be to use the Myers-Briggs Type Indicator, a typology system that sorts personality into 16 general categories with some room between them.
Right now, I will assume you take MBTI to be «that 16 personality thingy with a type code that means something.» It’s not to dumb down your understanding, but to simplify how I’ll talk about it.
Let’s first recall your Myers-Briggs type:
| myers-briggs type | E N F J |
If you’re having trouble seeing it, that’s ENFJ. The first question you’d probably ask is… what’s with the fading? Isn’t MBTI just a four-letter code?
The answer is complicated. Modern interpretations of MBTI (see Step II) have been inspired by the fluidity of scalar psychometric models such as the Big Five, and the four dichotomies MBTI has historically used are now encouraged to be seen as more fluid—you don’t have to be a total «introvert» or «extravert» but can be somewhere in between (see «ambivert»). All axes can be interpreted in this way: you can be in between N/S, T/F, or J/P.
You might then guess that the opacity of the four different letters indicates strength along these axes. For example, having a very faint «T» would mean that you are somewhere in the middle between F and T but lean very slightly toward T. This would indeed be a demonstration of the scalar model utilized in Step II MBTI.
However, this is not what the letter opacity indicates on this test. It instead describes the certainty of a letter preference. This test does not assume that there are four letter axes or dichotomies, but rather eight distinct sub-archetypes that can be independently measured and compared against one another.
Let’s take, for example, your N preference. I’m guessing it looks pretty faint on your screen. That’s because it might as well not be there: the test couldn’t really make out a pattern for you that erred on either side—N or S—and it might as well be either. Maybe you don’t really fit too well into how MBTI created the N/S axis. Maybe you find yourself going back and forth between them. That’s up to you to understand, though, and this test can only tell you that this degree of certainty exists. It can’t tell you much more than that.
Let’s apply it to all four of your preferences, then, shall we?
| E | Extremely uncertain. |
| N | Absolutely volatile! |
| F | Absolutely volatile! |
| J | Very uncertain. |
The theory behind this is complicated. There isn’t exactly a direct precedent for me to do things this way, and explaining the reasoning behind it would mean getting super verbose and technical. But in the most simple way I can put it:
The «dichotomies» MBTI uses are all, to some extent, false, as they overlap or are set apart in ways that make them imperfect opposites. Some dichotomies are worse in this regard than others—T/F is a poorly constrained axis, while I/E is pretty well-constrained. But because MBTI is like this, I can’t rely on the scalar model, which confines you to a way of thinking that may not even apply to you. Instead, I just compare the values of the eight separate letters and put it all together at the end.
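That pairwise comparison of eight separate letter scores can be sketched roughly as follows. To be clear, the score values and the certainty thresholds here are invented for illustration; the test does not publish its real numbers.

```python
# Hypothetical sketch: eight independently measured letter scores compared
# pairwise. The score values and certainty thresholds are invented.
letters = {"E": 21, "I": 20, "N": 18, "S": 18,
           "T": 15, "F": 15, "J": 12, "P": 17}

def preference(a, b, s):
    # The winner is whichever letter scored higher; certainty comes from
    # the size of the gap, not from a position on a single shared axis.
    winner = a if s[a] >= s[b] else b
    margin = abs(s[a] - s[b])
    if margin >= 8:
        certainty = "clear"
    elif margin >= 3:
        certainty = "uncertain"
    else:
        certainty = "volatile"  # might as well be either letter
    return winner, certainty

for pair in (("E", "I"), ("N", "S"), ("T", "F"), ("J", "P")):
    print(pair, preference(*pair, letters))
```

With the invented scores above, E/I, N/S, and T/F all come out volatile and J/P comes out as a modest P lean, mirroring the kind of result table shown earlier.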
Something that the MBTI is not associated with is ability. While ability may affect your four preferences, it wasn’t ever supposed to be an intrinsic property belonging to any of the eight preferences. The cognitive functions in type dynamics, however, do often deal with ability, and this test assumes that different abilities are associated with different functions. But forgetting about the Functions for a moment—what if we just took a look at how you rated yourself for each of the abilities mentioned in the test?
The results are in, and they say…
It isn’t particularly useful to just list your responses as-is, though. We need to instead compare you to other people who have taken the test—whether they respond similarly to you may be a better indicator of how distinguished your responses are.
| # | mean | sd | z-score |
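The z-score column in a table like this is just the standard calculation: how far your rating sits from the pool average, in units of the pool’s standard deviation. A minimal sketch, with a made-up respondent pool rather than real data:

```python
import statistics

# Invented sample: how other respondents rated one ability item (1-5 scale),
# next to your own rating. None of these numbers come from the real dataset.
pool = [3, 4, 2, 5, 4, 3, 4, 5, 2, 3]
your_rating = 5

mean = statistics.mean(pool)     # average rating across the pool
sd = statistics.stdev(pool)      # sample standard deviation
z = (your_rating - mean) / sd    # how many SDs you sit above the pool average
print(mean, round(sd, 2), round(z, 2))
```

A z-score near 0 means your self-rating is unremarkable for this pool; a score above 1 or below -1 marks it as relatively distinguished.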
Unfinished. Next probable update: 2020/10/10.
The function stack today originated with Grant and Brownsword, but has been popularized by figures like Linda Berens and Dario Nardi. There is a lot of history behind how this had come about, which you can read more about here: Full context: the cognitive functions.
**** the idea of having an «alternating stack» where the functions would be ordered IEIE or EIEI is fundamentally against how Jung described the function attitudes. Jung never made a stack template, but if he had, the attitudes would only ever appear in two exclusive groupings (i.e. IEEE, EEII, and IIIE would be acceptable, but not IEEI). Brownsword talked about how the «tertiary» function would be introverted according to Jungian analysts, but he really meant that a function in that position would be introverted in their (correct) analysis of Jung’s work; «tertiary» functions are not a thing in Jung’s Psychological Types.
I don’t understand—how is all of this calculated?
I used to give the exact formulas for the calculations before, but I like the idea of the numbers themselves being publicly ambiguous. But I really don’t have a reason to be obscure about how the formulas are set up:
The Grant-Brownsword algorithm calculates a score for all sixteen possible types by adding up weighted totals for the dominant, auxiliary, and—very weakly—tertiary functions, then subtracting weighted inferior function totals in the final add-up. It would look something like this: a(dominant)+b(auxiliary)+c(tertiary)-d(inferior) = type_score
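As a rough illustration of that formula, here is a sketch of the weighted scoring. The weights a-d, the example function scores, and the two sample stacks are all invented; only the a(dominant) + b(auxiliary) + c(tertiary) - d(inferior) shape comes from the description above.

```python
# Sketch of the Grant-Brownsword scoring described above. Weights and scores
# are invented; only the a*dom + b*aux + c*tert - d*inf shape is from the text.
a, b, c, d = 4.0, 2.0, 0.5, 1.0   # tertiary weighted "very weakly"

# Grant-style stacks (dominant, auxiliary, tertiary, inferior) for two of
# the sixteen types.
stacks = {
    "INFP": ("Fi", "Ne", "Si", "Te"),
    "ENFJ": ("Fe", "Ni", "Se", "Ti"),
}

scores = {"Ne": 14, "Ni": 9, "Se": 6, "Si": 11,
          "Te": 8, "Ti": 13, "Fe": 10, "Fi": 12}

def type_score(stack, s):
    dom, aux, tert, inf = stack
    return a * s[dom] + b * s[aux] + c * s[tert] - d * s[inf]

best = max(stacks, key=lambda t: type_score(stacks[t], scores))
print(best, type_score(stacks[best], scores))  # the higher-scoring type wins
```

The real test would run this over all sixteen types rather than two; the winner is simply whichever type_score comes out highest.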
The axis-based algorithm assumes that there are no inferior functions in your stack, and that functions on opposite ends create axes that you either prefer or do not prefer; in other words, your scores for Ne/Si are compared to Ni/Se, and the same goes for Fe/Ti and Fi/Te. The algorithm then tries to figure out which one of the four «valued» functions you prefer should be dominant, and voila! You get your type.
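The axis-based version can be sketched the same way. Again, the score values are invented, and the exact tie-breaking and dominant-selection rules here are my own simplifying assumptions:

```python
# Sketch of the axis-based algorithm: opposite-attitude pairs form axes, the
# two preferred axes yield four "valued" functions, and the strongest of those
# is taken as dominant. All score values here are invented.
scores = {"Ne": 14, "Ni": 9, "Se": 6, "Si": 11,
          "Te": 8, "Ti": 13, "Fe": 10, "Fi": 12}

def preferred_axis(axis_a, axis_b, s):
    # Compare the combined scores of two opposing axes, e.g. Ne/Si vs. Ni/Se.
    total_a = s[axis_a[0]] + s[axis_a[1]]
    total_b = s[axis_b[0]] + s[axis_b[1]]
    return axis_a if total_a >= total_b else axis_b

perceiving = preferred_axis(("Ne", "Si"), ("Ni", "Se"), scores)
judging = preferred_axis(("Fe", "Ti"), ("Fi", "Te"), scores)
valued = list(perceiving) + list(judging)
dominant = max(valued, key=lambda f: scores[f])
print(valued, dominant)
```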
Why isn’t my Myers-Briggs result the same as my function result?
Because they aren’t the same thing. Your Myers-Briggs result is based on the letter values assigned to each question (for example, agreeing with question #42 most significantly increases your E, N, and P scores even though it would give you 2 points for «Se») and your two other results are based only on the raw function algorithms. They are scored differently and mean different things.
How accurate is the test?
That really depends on what «accurate» means to you. My test is only meant to take your answers, run the formulas, and give you a result based on those formulas; solely in that regard, it is 100% accurate. Whether your result is an accurate reflection of your «function type» or your Myers-Briggs type is for you to decide.
But I should stress an important detail: I’ve received a little over 10k responses to date, and I’ve been able to compare purported Myers-Briggs types on this test with the types received on «raw» Form Q. Unfortunately, crossover data is scarce: only a tiny percentage of the slightly-less-than-10k responders (you can take tests more than once) have taken both the raw Form Q test and the function test. There is a slight NP/SJ bias in the margins, so I would seriously consider J for you if you scored «strong/clear N» and «undifferentiated» on J/P, or S if you scored «undifferentiated» and «strong/clear P,» etc. But my big problem is that I can’t offset the results with numerical addends or subtrahends, because the gaps between these results are often relative, not absolute.
For now, I would just recommend interpreting your results with this in mind, but I may add a permalink for your results for inquiry purposes soon.
But your test is totally inaccurate! The questions suck, and I know I’m definitely not the type I got.
It’s really anyone’s guess what an «accurate» interpretation of the functions is, because such a thing doesn’t actually exist. I know, crazy. Maybe you think those definitions are absolutely wrong, maybe somebody else thinks those definitions are absolutely correct. There isn’t a consensus on what function theory is, and there frankly never will be.
But if you do think you have all the answers, I added an option for people to choose an accuracy score for the test—not of their results since they haven’t seen them—but for the questions in «assessing» your functions. It’s a little dumb because no one actually knows which question scores for which function before they get their results, but it would be a little wonky adding post-result data to already-submitted results. I’m sure there’s a way, and I’ll have to experiment with what works best.
Enneagram Test FAQs
Want to know more about the Enneagram test? These are the questions we get asked most often.
What is the Enneagram personality test?
This personality test is based on the Enneagram personality typology. The Enneagram is an amazing tool to help people understand themselves and others better. The test will give you a first hint towards which of the 9 Enneagram personalities fits you best.
How long is the Enneagram test? How many questions do I have to answer?
We’ve built the first fully dynamic Enneagram test, which significantly reduces the number of questions while staying as accurate as possible. Depending on your answers, you will answer between 45 and 93 questions, which is on average 30%–50% faster than comparable Enneagram tests you’ll find online. It will take you between 9 and 15 minutes, and our data show that most users finish in under 12.
And in case you’re worried about not having enough time: rest assured, there is no time limit. You can take as long as you want.
What do I get after I take the Enneagram test?
Here’s what you’ll get:
On a less material level, you get to discover the unique perspective of how you see the world, and begin to better understand why you think, feel and behave the way you do.
All of this for free. Where’s the catch?
No catch at all. Our dream is that every person gets the chance to discover who they really are and become their healthiest self. That’s why we offer most of our material for free, including this Enneagram test. However, you can support us by purchasing your Full Report.
How accurate is your Enneagram test?
Currently, our personality test is about 85-90% accurate. We have researched it together with top experts in the field who have used and pioneered the Enneagram for decades. To ensure it is accurate and reliable we have put the best available research and technology together in an adaptive and flexible testing system.
But it doesn’t stop there. We are using the data of our test to constantly improve and refine it, meaning it just gets more accurate the more people take it.
There’s just one thing, and we want to be transparent about it: no Enneagram test, no matter what it promises, can ever reach 100% accuracy. Why? Because it is trying to assess your inner world, and that depends on how well you know yourself and how open and honest you are about it. No test in the world can control for that.
Is this an official Enneagram test?
There is no such thing as an “official” Enneagram test. There are a lot of tests out there, some good, some not so much. A good test will measure which of the 9 Enneagram types is your dominant type with 80%–90% accuracy, which is what we can say with confidence our test does very consistently, while staying short and quick.
What’s my Enneagram type? How can I be sure?
Since no Enneagram test can be 100% accurate, you should always take your result with a grain of salt. Consider it more as a hint towards your dominant type than a definitive assessment.
On top of that, we’ve found that two things help people get a more accurate and consistent test result. First, when taking the test, think about your life in broad terms: don’t confine yourself to your work environment or your family life, but be as general as possible. Second, think of how you behaved in your early twenties rather than how you behave now.
One last thing: the whole topic of discovering your Enneagram type is much more interesting and complex than any single test. That’s why we have put all our resources and tips into a comprehensive guide on how to be sure.
The Common Data Analytics Interview Questions You’ll Be Asked
Job interviews! They’re not everyone’s favorite pastime, and we’ve all experienced tough interviews where we’re caught off guard by a difficult question. Interviews are even harder when you’re applying for a role in a field that’s fairly new to you, and the probability of being stumped by a difficult question is even higher.
Let’s say you’ve expressed an interest in pursuing a career in data analytics, you’ve taken a course and are now ready to start applying for jobs. How do you ensure you’re not completely out of your depth going into the interview? What are interviewers likely to want to know about you, and how can you prepare accordingly?
With the help of our resident career advisor Danielle, we’ve produced a list of frequently asked interview questions and tips on how to answer them. We can’t guarantee you’ll bag the job, but we can certainly give you the confidence to walk out of the interview room knowing you’ve given it your best. Let’s start the interview!
Data analyst interview questions and answers
1. Introductory Questions
These questions are designed to ease you into the interview, and will focus on broad topics so the interviewer can get to know more about you.
“Tell me about yourself.”
Danielle says: When an interviewer asks this, what they’re essentially saying is: ‘Can you walk me through your career history, giving one takeaway from each of your experiences in work and education?’ It’s important to bear this in mind when fielding this question, and to structure your answer accordingly, so that you share the right kind of information and leave out the bits that aren’t important.
“How would you describe yourself as a data analyst?”
Danielle says: This is your chance to impress them with your passion and drive to work in data analytics. You need to press home your love of data, and explain the reasons why you’re pursuing analytics as a career. Lead the interviewer through your journey to becoming a data analyst and your approach to data analysis.
Demonstrate your awareness as to how and why having a solid understanding of the industry you’re looking to work in enhances your ability to carry out effective analysis. Outline your strengths and where they lie. Are you great at collaborating with teams? Are you a natural at programming languages? Do you love giving presentations on your findings? Explain what tools you’re familiar with, such as Excel, and what programming languages you know.
“What do you already know about the business/product—what value does your skill set add to what we’re doing?”
Danielle says: It’s essential you demonstrate your knowledge of the business and product, because that’s a key part of being a data analyst. The art of analytics lies in your ability to ask great questions, and you’ll only be able to ask such questions with sufficient background knowledge in the field. So demonstrate to the interviewer that you’ve done your research, and how your own analytical skills relate to the field. Perhaps you’ve already worked in the area before in a different capacity; show them how your previous experience relates to your new set of skills!
2. Data analysis questions

“Please share some past data analysis work you’ve done—and tell me about your most recent data analysis project.”
Danielle says: It’s best to use the STAR method when asked a question such as this: Situation, Task, Action, Result. Outline the circumstances surrounding a previous data analysis project, describe what you had to do, how you did it, and the outcome of your work. Don’t worry about being fairly rigid in your approach to this answer—just make sure the interviewer has everything they need to know by the end.
“Tell me about a time when you ran an analysis on the wrong set of data. How did you discover your mistake?”
Danielle says: The most important thing when answering questions about a mistake or a weakness is to take ownership of what happened. Mistakes aren’t what matters to the interviewer; your transparency and how you found a solution are. Outline the learning process and how it has enabled you to work more effectively.
“What was your most difficult data analyst project? What problem did you solve and what did you contribute to make it a success?”
Danielle says: Provide some context for what you’re about to say. Explain the project and the goal of it, going into some detail about your own role in the process. Then explain what aspect of it you found the most difficult. Your solution to overcoming this difficulty is what the interviewer’s looking for.
3. Technical Questions
These questions will touch upon more technical aspects of the data analyst role. Be prepared to bring up more working examples from your previous roles, and make sure you’ve prepared an answer for which aspects of the role appeal to you. Don’t worry though: these questions aren’t going to dive too deep into your expertise, so you needn’t fear being put on the spot!
“What’s your favourite tool for data analysis—your likes, dislikes, and why? What querying languages do you know?”
Danielle says: For this question, it’s important you detail your (hopefully excellent!) Excel skills, which are an integral part of performing data analysis. Prove your Excel credentials, outlining any courses you’ve taken or examples of analysis you’ve performed with the program. Employers will also want to know what querying languages you’re familiar with, whether it be SAS, R, Python, or another language. Querying languages are used for larger sets of data, so you’ll need to prove you have a solid foundation in at least one of them. Here’s a top tip: try to find out which querying language the company you’re applying to uses; it might come in handy!
“What do you do to stay up to date with new technologies?”
Danielle says: In data analytics, staying on top of developments in the field usually involves keeping your knowledge of existing libraries and frameworks up to date. So make sure you’re able to bring up some names of libraries when asked. The Kaggle Community is an online resource for data scientists and analysts that contains a huge amount of information on the subject, so why not join the community and expand your knowledge? Name-dropping such resources in an interview can sometimes help demonstrate your passion for data analytics!
“What are some of your best practices to ensure that you perform good, accurate, and informative data analysis?”
Danielle says: You’re generally going to be referring to data-cleansing checks when answering this question. By undertaking such checks, you’re able to ensure results are reliable and accurate. It’s also worth explaining to your interviewer that you keep an eye out for results that would be implausible. The interviewer might give you a small logic problem and ask you to explain how you’d overcome it. Explaining what you’d do, and the investigations you’d undertake if something looks odd, will show the interviewer that you have a good problem-solving mindset.
“How do you know you’re analyzing the right data?”
Danielle says: Asking the right questions is essential to being a good data analyst, so every new project must begin with asking the right questions. You need to ensure you’re measuring what needs to be measured, so walk the interviewer through your processes of determining what data needs to be analysed to answer the question.
“Tell me about a time that you had to explain the results of your analysis to stakeholders.”
Danielle says: This is a communications skills question—the interviewer is looking for evidence of your presentation skills. Explain times when you’ve had to present data you’ve worked on. Talk about how you’ve justified the results, and what impact your results had on the project.
4. Wrap-up questions
These questions tend to be hard to answer, but it’s very important to prepare well for them. You need to leave a good lasting impression with the interviewer!
“Tell me about your professional development goals and how you plan to achieve them?”
Danielle says: This is another way of saying ‘where do you see yourself in five years?’ It’s always hard to answer this question! Outline the next set of skills and tools you want to learn, or explain what leadership responsibilities interest you. Differentiate whether you want to go down the subject matter track, or the leadership track. Do you want to have a mentor, or eventually be a mentor yourself? Is there a pivot you want to take in your career? Or do you see yourself growing into the role of data scientist, or specializing more in programming? You’ll impress the interviewer if your future career objectives are clear.
“Do you have any questions?”
Danielle says: It’s a good idea to prepare three to five questions in advance of the interview. If you’re going to be interviewed by several people, then prepare more. You want to avoid having your questions already answered during the interview, so aim to have a surplus. Avoid generic questions such as ‘where do you see the company going?’ and personalize your questions to the interviewer. This is the part of the interview where you get the opportunity to open a dialogue and show the value you can bring to the company, if you haven’t already. Questions such as ‘Who will I be most closely working with?’ and ‘What are the biggest challenges facing the team this year?’ are likely to leave a good impression on your interviewer.
You’ll now have a greater understanding of the kind of questions you’ll be asked in interviews for data analyst positions. If you’re curious about becoming a data analyst, why not take our one-month Intro to Data Analytics course? You’ll come away with a solid grounding in Microsoft Excel, one of the key tools used by data analysts. Not ready to commit to a full course? Try this free, five-day data analytics short course.
If you’d like to read more about working as a data analyst, we suggest you read the following articles:
What You Should Do Now
Get a hands-on introduction to data analytics and carry out your first analysis with our free, self-paced Data Analytics Short Course.
Take part in one of our FREE live online data analytics events with industry experts.
Talk to a program advisor to discuss career change and find out how you could become a qualified data analyst in just 4-7 months—complete with a job guarantee.
Tom Taylor is a Welsh copywriter and journalist. He’s worked for a number of tech companies in Berlin and spends his weekends writing about music and food for acclaimed blog Berlin Loves You.
Communicating with Data: A Guide for Data Analysts
In the world of big data, paring things down to a more digestible format is a difficult task.
The difficulty remains no matter how familiar you are with a dataset, and the process can be confusing for people who do not have much experience with data.
Facilitating and guiding the conversation around how to approach and present data is a major aspect of a data analyst role. To learn more about other skills effective data analysts have, check out our Careers in Data Analytics blog.
Effective communication makes the overall process more efficient. It also helps the customer get a better grasp of how they can leverage their data. Of course, this process can get messy and confusing since client dashboard requirements can often shift multiple times.
By keeping a few components in mind, communication can become more focused and efficient, regardless of which data visualization platform you are using.
These components are:
If you are a data analyst who works with clients, this guide will help you ensure that you communicate effectively with your audience.
Data Evaluation and Access
Although it might seem obvious, the first place to start is by familiarizing yourself with the data and where it lives.
Ensure that you have the logins and access to relevant data sources and the necessary licenses to create and publish your visualizations.
Once you have access, it is a good idea to dive into the data and understand the structure of the tables. Things like how large the datasets are, data granularity, what fields and measures are in each table, and how the tables are related to each other are crucial pieces of the puzzle. Having this knowledge from the outset also allows for better client expectation management.
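As a rough sketch of this exploration step, here is what an initial structure check might look like in pandas; the table, its fields, and the figures are hypothetical:

```python
import pandas as pd

# Hypothetical sales table; in practice this would come from the client's database.
sales = pd.DataFrame({
    "order_date": pd.to_datetime(["2023-01-05", "2023-01-17", "2023-02-02"]),
    "category": ["Tables", "Chairs", "Tables"],
    "amount": [1200.0, 350.0, 980.0],
})

# First questions to answer before promising anything to the client:
n_rows, n_cols = sales.shape           # how large is the dataset?
column_types = sales.dtypes.to_dict()  # what fields and measures are in the table?
months_covered = sales["order_date"].dt.to_period("M").nunique()  # data granularity
```

Answering these questions up front makes it much easier to set realistic expectations with the client.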
Take a company that sells furniture and classes its products into categories such as Tables and Chairs. These categories are then further broken down into Subcategories such as Office, Home Office, and Hotel. A member of the Sales team might wish to see data broken down by month and by Subcategories. On the surface, this might be a simple task.
Delving into the data might reveal that one table contains the Sales by Date and Category, and another table contains Sales by Subcategories with no clear way to join these two tables. Bringing this issue to light at the very beginning can expedite the data team’s ability to find a solution and is a chance to keep the client involved in their data.
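The mismatch described above can be sketched in pandas. The tables, names, and figures here are invented for illustration, and the small mapping table is one possible fix a data team might propose:

```python
import pandas as pd

# Hypothetical extracts of the two tables described above.
sales_by_category = pd.DataFrame({
    "month": ["2023-01", "2023-01"],
    "category": ["Tables", "Chairs"],
    "sales": [5000, 2000],
})
sales_by_subcategory = pd.DataFrame({
    "month": ["2023-01", "2023-01", "2023-01"],
    "subcategory": ["Office", "Home Office", "Hotel"],
    "sales": [1500, 2500, 3000],
})

# The two tables share no key, so a direct join is impossible. One fix:
# a small mapping table from subcategory to category, agreed with the client.
subcategory_map = pd.DataFrame({
    "subcategory": ["Office", "Home Office", "Hotel"],
    "category": ["Tables", "Tables", "Chairs"],
})

bridged = sales_by_subcategory.merge(subcategory_map, on="subcategory")
monthly = bridged.groupby(["month", "category"], as_index=False)["sales"].sum()
```

Surfacing the missing key early, and agreeing on a mapping like this, keeps the client involved in decisions about their own data.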
Audience
The next step in gathering dashboard requirements is understanding your audience. Gathering information such as their job title and the team they belong to allows for a better sense of their needs. It is also fundamental to have a sense of your user’s analytical maturity as a more mature audience will allow for more complex analysis.
Is the report being used by a C-suite executive with a lot of demands on their time? If so, it is best if the report focuses on KPIs and provides a quick overview of how key aspects of the business are performing. Or is the audience a store manager that needs to delve into details and see where potential problem areas might be? If so, a dashboard might require a breakdown of each KPI to highlight sources of success and potential problems.
This component requires an understanding of the format and manner in which your audience will interact with the dashboards. Knowing whether the audience intends to download the information and use it as a static image, or to interact with it directly in Tableau, will be key in determining the functionalities and technical complexity of the reports.
Multiple Iterations
Like most pieces of work, a dashboard often requires more than one iteration. Naturally, this can be frustrating for both parties, especially if there is a time crunch. Establishing that creating a dashboard will require a few iterations keeps the process smooth, especially with multiple stakeholders.
What is Data Analysis? Methods, Techniques & Tools
What is Data Analysis? Definition & Example
Data Analysis is the systematic application of statistical and logical techniques to describe the scope of the data, modularize its structure, condense its representation, illustrate it via images, tables, and graphs, and evaluate statistical tendencies and probabilities in order to derive meaningful conclusions. These analytical procedures enable us to draw the underlying inferences from data by stripping away the surrounding noise. Data generation is a continual process, which makes data analysis a continuous, iterative process in which data collection and analysis happen simultaneously. Ensuring data integrity is one of the essential components of data analysis.
Data analysis is used in many areas, including transportation, risk and fraud detection, customer interaction, city planning, healthcare, web search, digital advertising, and more.
Consider healthcare as an example: during the recent Coronavirus pandemic, hospitals faced the challenge of coping with the pressure of treating as many patients as possible, and data analysis allowed them to monitor machine and data usage in such scenarios to achieve efficiency gains.
Before diving any deeper, make sure the following prerequisites for proper Data Analysis are in place:
Data Analysis Methods
There are two main methods of Data Analysis:
1. Qualitative Analysis
This approach mainly answers questions such as ‘why,’ ‘what’ or ‘how.’ Each of these questions is addressed via qualitative techniques such as questionnaires, attitude scaling, standard outcomes, and more. The results of such analysis usually take the form of texts and narratives, which might also include audio and video representations.
2. Quantitative Analysis
Generally, this analysis is measured in terms of numbers. The data here present themselves in terms of measurement scales and lend themselves to further statistical manipulation.
The other techniques include:
3. Text analysis
Text analysis is a technique for analyzing texts to extract machine-readable facts. It aims to create structured data out of free, unstructured content. The process consists of slicing and dicing heaps of unstructured, heterogeneous files into data pieces that are easy to read, manage, and interpret. It is also known as text mining, text analytics, and information extraction.
The ambiguity of human languages is the biggest challenge of text analysis. For example, humans know that “Red Sox Tames Bull” refers to a baseball match. Still, if this text is fed to a computer without background knowledge, it would generate several linguistically valid interpretations. Sometimes people who are not interested in baseball might have trouble understanding it too.
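As a minimal illustration of turning free text into machine-readable facts, here is a standard-library sketch; the invoice text and patterns are made up, and real text analytics would use proper NLP tooling:

```python
import re
from collections import Counter

# Toy text-mining pass: pull structured (date, amount) pairs out of
# free-form text, then count word frequencies. Unstructured in, structured out.
text = "Invoice 2023-04-01 total $120. Invoice 2023-04-15 total $80."

# Extract machine-readable facts as a list of (date, amount) tuples.
invoices = re.findall(r"(\d{4}-\d{2}-\d{2}) total \$(\d+)", text)

# A crude word-frequency summary of the same text.
word_counts = Counter(re.findall(r"[a-z]+", text.lower()))
```

The ambiguity problem described above is exactly why real systems need far more than regular expressions, but the shape of the output, structured records extracted from prose, is the same.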
4. Statistical analysis
Statistics involves data collection, interpretation, and validation. Statistical analysis is the technique of performing statistical operations to quantify the data. Quantitative data involves descriptive data like surveys and observational data; this is also called descriptive analysis. Various tools can perform statistical data analysis, such as SAS (Statistical Analysis System), SPSS (Statistical Package for the Social Sciences), StatSoft, and more.
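A tiny descriptive-statistics sketch using Python’s standard statistics module, with hypothetical survey numbers:

```python
import statistics

# Hypothetical survey sample: response times in seconds.
response_times = [12.0, 15.5, 11.2, 14.8, 13.0]

mean = statistics.mean(response_times)      # central tendency
median = statistics.median(response_times)  # robust central tendency
stdev = statistics.stdev(response_times)    # sample standard deviation (spread)
```

Dedicated packages like SAS or SPSS do the same quantification at scale, with far richer tests and reporting.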
5. Diagnostic analysis
Diagnostic analysis goes a step beyond statistical analysis, providing a more in-depth analysis to answer why something happened. It is also referred to as root cause analysis, as it includes processes like data discovery, data mining, and drill-down and drill-through.
6. Predictive analysis
Predictive analysis uses historical data and feeds it into a machine learning model to find critical patterns and trends. The model is then applied to current data to predict what will happen next. Many organizations prefer it because of advantages such as growing volumes and types of data, faster and cheaper computing, easier-to-use software, tighter economic conditions, and the need for competitive differentiation.
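As a minimal sketch of the idea, the following fits a least-squares trend line to hypothetical historical sales and extrapolates one step ahead; production predictive analysis would use proper ML tooling and far messier data:

```python
# Hypothetical historical data: monthly sales, deliberately kept perfectly
# linear so the arithmetic is easy to follow.
months = [1, 2, 3, 4]
sales = [100.0, 110.0, 120.0, 130.0]

# Ordinary least-squares fit of sales = slope * month + intercept.
n = len(months)
mx = sum(months) / n
my = sum(sales) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(months, sales))
         / sum((x - mx) ** 2 for x in months))
intercept = my - slope * mx

# Apply the fitted model to predict the next period.
forecast_month_5 = slope * 5 + intercept
```

A real model would be validated against held-out data before anyone trusted its forecasts.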
7. Prescriptive Analysis
Prescriptive analytics suggests various courses of action and outlines the potential implications of each, building on the results of predictive analysis. Generating automated decisions or recommendations with prescriptive analysis requires specific algorithms as well as clear direction from those applying the analytical techniques.
Data Analysis Process
Once you set out to collect data for analysis, you can be overwhelmed by the amount of information available. With so much data to handle, you need to identify the data relevant to your analysis in order to derive accurate conclusions and make informed decisions. The following simple steps help you identify and sort out your data for analysis.
1. Data Requirement Gathering
2. Data Collection
3. Data Processing
4. Data Analysis
5. Infer and Interpret Results
Once you have an inference, remember that it is only a hypothesis; real-life factors may always interfere with your results. In data analysis, a few related terms identify different phases of the process.
1. Data Mining
This process involves methods for finding patterns in a data sample.
2. Data Modelling
This refers to how an organization organizes and manages its data.
Data Analysis Techniques
There are different techniques for data analysis, depending on the question at hand, the type of data, and the amount of data gathered. Each focuses on taking in new data, mining insights, and drilling down into the information to turn facts and figures into decision-making parameters. Accordingly, the different techniques of data analysis can be categorized as follows:
1. Techniques based on Mathematics and Statistics
2. Techniques based on Artificial Intelligence and Machine Learning
3. Techniques based on Visualization and Graphs
Let us now look at a few tools used in data analysis.
Data Analysis Tools
There are several data analysis tools available in the market, each with its own set of functions. The selection of tools should always be based on the type of analysis performed and the type of data being handled. Here is a list of a few compelling tools for Data Analysis.
1. Excel
Excel has various compelling features, and with additional plugins installed it can handle massive amounts of data. So, if your data does not approach true big-data scale, Excel is a versatile tool for data analysis.
2. Tableau
Tableau falls under the BI tool category, made for the sole purpose of data analysis. The essence of Tableau is the pivot table and pivot chart, and it works towards representing data in the most user-friendly way. It additionally has a data cleaning feature along with brilliant analytical functions.
3. Power BI
Power BI initially started as a plugin for Excel, but later detached from it to develop into one of the most complete data analytics tools. It comes in three versions: Free, Pro, and Premium. Its Power Pivot and DAX language can implement sophisticated advanced analytics, similar to writing Excel formulas.
4. FineReport
FineReport offers straightforward drag-and-drop operation, which helps in designing various reports and building a data decision analysis system. It can connect directly to all kinds of databases, and its format is similar to that of Excel. It also provides a variety of dashboard templates and several self-developed visual plug-in libraries.
5. R & Python
These are very powerful and flexible programming languages. R is best at statistical analysis, such as normal distributions, cluster classification algorithms, and regression analysis. It also supports individual-level predictive analyses, such as predicting a customer’s behavior, spending, and preferred items based on their browsing history. Both languages also support machine learning and artificial intelligence.
6. SAS
SAS is a programming language for data analytics and data manipulation that can easily access data from any source. SAS has introduced a broad set of customer profiling products for web, social media, and marketing analytics, which can predict customer behaviors and manage and optimize communications.
Conclusion
This is our complete beginner’s guide on «What is Data Analysis». If you want to learn more about data analysis, Complete Introduction to Business Data Analysis is a great introductory course.
Data Analysis is key to any business, whether you are starting a new venture, making marketing decisions, continuing with a particular course of action, or going for a complete shutdown. The inferences and statistical probabilities calculated from data analysis help ground the most critical decisions and reduce human bias. Different analytical tools have overlapping functions and different limitations, but they are also complementary. Before choosing a data analysis tool, it is essential to consider the scope of work, infrastructure limitations, economic feasibility, and the final report to be prepared.
What is Data Analysis? Research, Types & Example
Updated July 14, 2022
What is Data Analysis?
Data analysis is defined as a process of cleaning, transforming, and modeling data to discover useful information for business decision-making. The purpose of data analysis is to extract useful information from data and to make decisions based on that analysis.
A simple example of data analysis: whenever we make a decision in day-to-day life, we think about what happened last time or what will happen if we choose a particular option. That is nothing but analyzing our past or future and making a decision based on it. To do so, we gather memories of our past or dreams of our future. That, too, is data analysis. When an analyst does the same thing for business purposes, it is called Data Analysis.
In this Data Science Tutorial, you will learn:
Why Data Analysis?
To grow your business, or even to grow in your life, sometimes all you need to do is analysis!
If your business is not growing, you have to look back, acknowledge your mistakes, and make a new plan without repeating those mistakes. And even if your business is growing, you have to look forward to making it grow further. All you need to do is analyze your business data and business processes.
Types of Data Analysis: Techniques and Methods
Several types of data analysis techniques exist, varying by business need and technology. However, the major data analysis methods are:
Text Analysis
Text analysis is also referred to as data mining. It is a method of data analysis that discovers patterns in large data sets using databases or data mining tools, and it is used to transform raw data into business information. Business intelligence tools on the market are used to make strategic business decisions. Overall, it offers a way to extract and examine data, derive patterns, and finally interpret the data.
Statistical Analysis
Statistical analysis shows “What happened?” by using past data, often in the form of dashboards. It includes the collection, analysis, interpretation, presentation, and modeling of data. It analyzes either a complete data set or a sample of data. There are two categories of this type of analysis: descriptive analysis and inferential analysis.
Descriptive Analysis
Descriptive analysis examines the complete data or a summarized sample of numerical data. It shows the mean and deviation for continuous data, and percentages and frequencies for categorical data.
Inferential Analysis
Inferential analysis examines a sample drawn from the complete data. In this type of analysis, you can reach different conclusions from the same data by selecting different samples.
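The contrast between the two can be sketched with a toy example: the descriptive mean summarizes the complete data, while two hand-picked samples lead to quite different inferential estimates. All numbers are hypothetical:

```python
import statistics

# Hypothetical daily sales figures for the complete data, including two spike days.
population = [10, 12, 14, 40, 11, 13, 12, 41]

# Descriptive analysis: summarize the complete data.
descriptive_mean = statistics.mean(population)

# Inferential analysis: estimate from samples. These two subsets of the
# population happen to tell very different stories.
sample_a = [10, 12, 14, 11]   # misses both spike days
sample_b = [40, 41, 12, 13]   # catches both spike days
mean_a = statistics.mean(sample_a)
mean_b = statistics.mean(sample_b)
```

This is why sound sampling design matters: a poorly chosen sample can make the same underlying data support opposite conclusions.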
Diagnostic Analysis
Diagnostic analysis shows “Why did it happen?” by finding the cause from the insights uncovered in statistical analysis. This analysis is useful for identifying behavior patterns in data. If a new problem arises in your business process, you can look to this analysis to find similar patterns of that problem, and you may be able to apply similar prescriptions to the new problem.
Predictive Analysis
Predictive analysis shows “What is likely to happen?” by using previous data. The simplest example: if I bought two dresses last year based on my savings, and this year my salary has doubled, I can buy four dresses. But of course it’s not that easy, because you have to consider other circumstances, such as the chance that clothing prices will rise this year, or that instead of dresses you may want to buy a new bike, or need to buy a house!
So this analysis makes predictions about future outcomes based on current or past data. Forecasting is just an estimate; its accuracy depends on how much detailed information you have and how deeply you dig into it.
Prescriptive Analysis
Prescriptive Analysis combines the insight from all previous Analysis to determine which action to take in a current problem or decision. Most data-driven companies are utilizing Prescriptive Analysis because predictive and descriptive Analysis are not enough to improve data performance. Based on current situations and problems, they analyze the data and make decisions.
Data Analysis Process
The data analysis process is nothing but gathering information by using a proper application or tool that allows you to explore the data and find patterns in it. Based on that information, you can make decisions or reach final conclusions.
Data Analysis consists of the following phases:
Data Requirement Gathering
First of all, you have to think about why you want to do this data analysis. You need to find out the purpose or aim of the analysis and decide which type of data analysis you want to do. In this phase, you have to decide what to analyze and how to measure it; you have to understand why you are investigating and what measures you will use to do this analysis.
Data Collection
After requirement gathering, you will have a clear idea of what you have to measure and what your findings should be. Now it’s time to collect your data based on those requirements. Once collected, the data must be processed or organized for analysis. As you collect data from various sources, keep a log with the collection date and source of each piece of data.
Data Cleaning
The data collected may not all be useful or relevant to the aim of your analysis, so it should be cleaned. The collected data may contain duplicate records, white spaces, or errors; it should be cleaned and made error-free. This phase must come before analysis, because good data cleaning brings the output of your analysis closer to your expected outcome.
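A minimal cleaning pass of the kind described above might look like this in pandas; the records are invented for illustration:

```python
import pandas as pd

# Hypothetical collected records: stray whitespace, a duplicate, and a gap.
raw = pd.DataFrame({
    "customer": [" Alice", "Bob ", " Alice", "Cara"],
    "amount": [100.0, 250.0, 100.0, None],
})

clean = raw.copy()
clean["customer"] = clean["customer"].str.strip()  # remove white spaces
clean = clean.drop_duplicates()                    # remove duplicate records
clean = clean.dropna(subset=["amount"])            # drop records with missing values
```

Real cleaning also handles inconsistent spellings, invalid codes, and out-of-range values, but the pattern of small, auditable transformations is the same.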
Data Analysis
Once the data is collected, cleaned, and processed, it is ready for Analysis. As you manipulate data, you may find you have the exact information you need, or you might need to collect more data. During this phase, you can use data analysis tools and software which will help you to understand, interpret, and derive conclusions based on the requirements.
Data Interpretation
After analyzing your data, it’s finally time to interpret your results. You can choose how to express or communicate your data analysis: simply in words, or perhaps with a table or chart. Then use the results of your data analysis process to decide on your best course of action.
Data Visualization
Data visualization is very common in day-to-day life; it often appears in the form of charts and graphs. In other words, data is shown graphically so that the human brain can understand and process it more easily. Data visualization is often used to discover unknown facts and trends. By observing relationships and comparing datasets, you can uncover meaningful information.
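To show the core idea of mapping numbers to visual length without assuming any charting library, here is a dependency-free text “bar chart” sketch; real work would use Matplotlib, Tableau, or Power BI, and the sales figures are made up:

```python
# Hypothetical monthly sales figures.
monthly_sales = {"Jan": 4, "Feb": 7, "Mar": 5}

# Render each value as a bar whose length is proportional to the number.
chart_lines = [f"{month} {'#' * value}" for month, value in monthly_sales.items()]
chart = "\n".join(chart_lines)
print(chart)
```

Even this crude chart makes the February peak visible at a glance, which is exactly what graphical encoding buys you over a table of raw numbers.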
Twelve Million Phones, One Dataset, Zero Privacy
One Nation, TRACKED
By Stuart A. Thompson and Charlie Warzel Dec. 19, 2019
Every minute of every day, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.
Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. The data was provided to Times Opinion by sources who asked to remain anonymous because they were not authorized to share it and could face severe penalties for doing so. The sources of the information said they had grown alarmed about how it might be abused and urgently wanted to inform the public and lawmakers.
[Related: How to Track President Trump — Read more about the national security risks found in the data.]
After spending months sifting through the data, tracking the movements of people across the country and speaking with dozens of data companies, technologists, lawyers and academics who study this field, we feel the same sense of alarm. In the cities that the data file covers, it tracks people from nearly every neighborhood and block, whether they live in mobile homes in Alexandria, Va., or luxury towers in Manhattan.
One search turned up more than a dozen people visiting the Playboy Mansion, some overnight. Without much effort we spotted visitors to the estates of Johnny Depp, Tiger Woods and Arnold Schwarzenegger, connecting the devices’ owners to the residences indefinitely.
If you lived in one of the cities the dataset covers and use apps that share your location — anything from weather apps to local news apps to coupon savers — you could be in there, too.
If you could see the full trove, you might never use your phone the same way again.
The data reviewed by Times Opinion didn’t come from a telecom or giant tech company, nor did it come from a governmental surveillance operation. It originated from a location data company, one of dozens quietly collecting precise movements using software slipped onto mobile phone apps. You’ve probably never heard of most of the companies — and yet to anyone who has access to this data, your life is an open book. They can see the places you go every moment of the day, whom you meet with or spend the night with, where you pray, whether you visit a methadone clinic, a psychiatrist’s office or a massage parlor.
The Times and other news organizations have reported on smartphone tracking in the past. But never with a data set so large. Even still, this file represents just a small slice of what’s collected and sold every day by the location tracking industry — surveillance so omnipresent in our digital lives that it now seems impossible for anyone to avoid.
It doesn’t take much imagination to conjure the powers such always-on surveillance can provide an authoritarian regime like China’s. Within America’s own representative democracy, citizens would surely rise up in outrage if the government attempted to mandate that every person above the age of 12 carry a tracking device that revealed their location 24 hours a day. Yet, in the decade since Apple’s App Store was created, Americans have, app by app, consented to just such a system run by private companies. Now, as the decade ends, tens of millions of Americans, including many children, find themselves carrying spies in their pockets during the day and leaving them beside their beds at night — even though the corporations that control their data are far less accountable than the government would be.
[Related: Where Even the Children Are Being Tracked — We followed every move of people in one city. Then we went to tell them.]
“The seduction of these consumer products is so powerful that it blinds us to the possibility that there is another way to get the benefits of the technology without the invasion of privacy. But there is,” said William Staples, founding director of the Surveillance Studies Research Center at the University of Kansas. “All the companies collecting this location information act as what I have called Tiny Brothers, using a variety of data sponges to engage in everyday surveillance.”
In this and subsequent articles we’ll reveal what we’ve found and why it has so shaken us. We’ll ask you to consider the national security risks the existence of this kind of data creates and the specter of what such precise, always-on human tracking might mean in the hands of corporations and the government. We’ll also look at legal and ethical justifications that companies rely on to collect our precise locations and the deceptive techniques they use to lull us into sharing it.
Today, it’s perfectly legal to collect and sell all this information. In the United States, as in most of the world, no federal law limits what has become a vast and lucrative trade in human tracking. Only internal company policies and the decency of individual employees prevent those with access to the data from, say, stalking an estranged spouse or selling the evening commute of an intelligence officer to a hostile foreign power.
Companies say the data is shared only with vetted partners. As a society, we’re choosing simply to take their word for that, displaying a blithe faith in corporate beneficence that we don’t extend to far less intrusive yet more heavily regulated industries. Even if these companies are acting with the soundest moral code imaginable, there’s ultimately no foolproof way they can secure the data from falling into the hands of a foreign security service. Closer to home, on a smaller yet no less troubling scale, there are often few protections to stop an individual analyst with access to such data from tracking an ex-lover or a victim of abuse.
A DIARY OF YOUR EVERY MOVEMENT
The companies that collect all this information on your movements justify their business on the basis of three claims: People consent to be tracked, the data is anonymous and the data is secure.
None of those claims hold up, based on the file we’ve obtained and our review of company practices.
Yes, the location data contains billions of data points with no identifiable information like names or email addresses. But it’s child’s play to connect real names to the dots that appear on the maps.
Here’s what that looks like.
The data included more than 10,000 smartphones tracked in Central Park. Here is one smartphone, isolated from the crowd. Here are all the pings from that smartphone over the period covered by the data. Connecting those pings reveals a diary of the person’s life.
Note: Driving path is inferred. Data has been additionally obscured. Satellite imagery: Maxar Technologies, New York G.I.S., U.S.D.A. Farm Service Agency, Imagery, Landsat/Copernicus and Sanborn.
In most cases, ascertaining a home location and an office location was enough to identify a person. Consider your daily commute: Would any other smartphone travel directly between your house and your office every day?
Describing location data as anonymous is “a completely false claim” that has been debunked in multiple studies, Paul Ohm, a law professor and privacy researcher at the Georgetown University Law Center, told us. “Really precise, longitudinal geolocation information is absolutely impossible to anonymize.”
“D.N.A.,” he added, “is probably the only thing that’s harder to anonymize than precise geolocation information.”
[Work in the location tracking industry? Seen an abuse of data? We want to hear from you. Using a non-work phone or computer, contact us on a secure line at 440-295-5934, @charliewarzel on Wire or email Charlie Warzel and Stuart A. Thompson directly.]
Yet companies continue to claim that the data are anonymous. In marketing materials and at trade conferences, anonymity is a major selling point — key to allaying concerns over such invasive monitoring.
To evaluate the companies’ claims, we turned most of our attention to identifying people in positions of power. With the help of publicly available information, like home addresses, we easily identified and then tracked scores of notables. We followed military officials with security clearances as they drove home at night. We tracked law enforcement officers as they took their kids to school. We watched high-powered lawyers (and their guests) as they traveled from private jets to vacation properties. We did not name any of the people we identified without their permission.
The data set is large enough that it surely points to scandal and crime but our purpose wasn’t to dig up dirt. We wanted to document the risk of underregulated surveillance.
Watching dots move across a map sometimes revealed hints of faltering marriages, evidence of drug addiction, records of visits to psychological facilities.
Connecting a sanitized ping to an actual human in time and place could feel like reading someone else’s diary.
In one case, we identified Mary Millben, a singer based in Virginia who has performed for three presidents, including President Trump. She was invited to the service at the Washington National Cathedral the morning after the president’s inauguration. That’s where we first found her.
She remembers how, surrounded by dignitaries and the first family, she was moved by the music echoing through the recesses of the cathedral while members of both parties joined together in prayer. All the while, the apps on her phone were also monitoring the moment, recording her position and the length of her stay in meticulous detail. For the advertisers who might buy access to the data, the intimate prayer service could well supply some profitable marketing insights.
“To know that you have a list of places I have been, and my phone is connected to that, that’s scary,” Ms. Millben told us. “What’s the business of a company benefiting off of knowing where I am? That seems a little dangerous to me.”
Like many people we identified in the data, Ms. Millben said she was careful about limiting how she shared her location. Yet like many of them, she also couldn’t name the app that might have collected it. Our privacy is only as secure as the least secure app on our device.
“That makes me uncomfortable,” she said. “I’m sure that makes every other person uncomfortable, to know that companies can have free rein to take your data, locations, whatever else they’re using. It is disturbing.”
[Related: What’s the Worst That Could Happen With My Phone Data? — Our journalists answer your questions about their investigation into how companies track smartphone users.]
The inauguration weekend yielded a trove of personal stories and experiences: elite attendees at presidential ceremonies, religious observers at church services, supporters assembling across the National Mall — all surveilled and recorded permanently in rigorous detail.
Protesters were tracked just as rigorously. After the pings of Trump supporters, basking in victory, vanished from the National Mall on Friday evening, they were replaced hours later by those of participants in the Women’s March, as a crowd of nearly half a million descended on the capital. Examining just a photo from the event, you might be hard-pressed to tie a face to a name. But in our data, pings at the protest connected to clear trails through the data, documenting the lives of protesters in the months before and after the protest, including where they lived and worked.
We spotted a senior official at the Department of Defense walking through the Women’s March, beginning on the National Mall and moving past the Smithsonian National Museum of American History that afternoon. His wife was also on the mall that day, something we discovered after tracking him to his home in Virginia. Her phone was also beaming out location data, along with the phones of several neighbors.
The official’s data trail also led to a high school, homes of friends, a visit to Joint Base Andrews, workdays spent in the Pentagon and a ceremony at Joint Base Myer-Henderson Hall with President Barack Obama in 2017 (nearly a dozen more phones were tracked there, too).
Inauguration Day weekend was marked by other protests — and riots. Hundreds of protesters, some in black hoods and masks, gathered north of the National Mall that Friday, eventually setting fire to a limousine near Franklin Square. The data documented those rioters, too. Filtering the data to that precise time and location led us to the doorsteps of some who were there. Police were present as well, many with faces obscured by riot gear. The data led us to the homes of at least two police officers who had been at the scene.
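The filtering described above — isolating every ping inside a narrow time window and geographic box — can be sketched in a few lines. This is a minimal illustration, not the methodology Times Opinion used; the device IDs, coordinates, and timestamps below are invented.

```python
from datetime import datetime, timezone

# Hypothetical ping records: (device_id, timestamp, latitude, longitude).
pings = [
    ("device-a", datetime(2017, 1, 20, 13, 5, tzinfo=timezone.utc), 38.9016, -77.0353),
    ("device-b", datetime(2017, 1, 20, 13, 7, tzinfo=timezone.utc), 38.9020, -77.0360),
    ("device-c", datetime(2017, 1, 21, 9, 0, tzinfo=timezone.utc), 38.8895, -77.0501),
]

def pings_in_window(pings, start, end, lat_min, lat_max, lon_min, lon_max):
    """Return the device IDs seen inside the bounding box during [start, end]."""
    return {
        device
        for device, ts, lat, lon in pings
        if start <= ts <= end
        and lat_min <= lat <= lat_max
        and lon_min <= lon <= lon_max
    }

# A rough box around Franklin Square during the Friday afternoon unrest.
hits = pings_in_window(
    pings,
    start=datetime(2017, 1, 20, 12, 0, tzinfo=timezone.utc),
    end=datetime(2017, 1, 20, 14, 0, tzinfo=timezone.utc),
    lat_min=38.90, lat_max=38.91,
    lon_min=-77.04, lon_max=-77.03,
)
```

Once a handful of device IDs fall out of a filter like this, each ID's full months-long trail — home, workplace, commute — can be pulled from the rest of the dataset.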
As revealing as our searches of Washington were, we were relying on just one slice of data, sourced from one company, focused on one city, covering less than one year. Location data companies collect orders of magnitude more information every day than the totality of what Times Opinion received.
Data firms also typically draw on other sources of information that we didn’t use. We lacked the mobile advertising IDs or other identifiers that advertisers often combine with demographic information like home ZIP codes, age, gender, even phone numbers and emails to create detailed audience profiles used in targeted advertising. When datasets are combined, privacy risks can be amplified. Whatever protections existed in the location dataset can crumble with the addition of only one or two other sources.
There are dozens of companies profiting off such data daily across the world — by collecting it directly from smartphones, creating new technology to better capture the data or creating audience profiles for targeted advertising.
The full collection of companies can feel dizzying, as it’s constantly changing and seems impossible to pin down. Many use technical and nuanced language that may be confusing to average smartphone users.
While many of them have been involved in the business of tracking us for years, the companies themselves are unfamiliar to most Americans. (Companies can work with data derived from GPS sensors, Bluetooth beacons and other sources. Not all companies in the location data business collect, buy, sell or work with granular location data.)
[Graphic: A Selection of Companies Working in the Location Data Business]
Location data companies generally downplay the risks of collecting such revealing information at scale. Many also say they’re not very concerned about potential regulation or software updates that could make it more difficult to collect location data.
“No, it doesn’t really keep us up at night,” Brian Czarny, chief marketing officer at Factual, one such company, said. He added that Factual does not resell detailed data like the information we reviewed. “We don’t feel like anybody should be doing that because it’s a risk to the whole business,” he said.
In the absence of a federal privacy law, the industry has largely relied on self-regulation. Several industry groups offer ethical guidelines meant to govern it. Factual joined the Mobile Marketing Association, along with many other location data and marketing companies, in drafting a pledge intended to improve its self-regulation. The pledge is slated to be released next year.
States are starting to respond with their own laws. The California Consumer Privacy Act goes into effect next year and adds new protections for residents there, like allowing them to ask companies to delete their data or prevent its sale. But aside from a few new requirements, the law could leave the industry largely unencumbered.
“If a private company is legally collecting location data, they’re free to spread it or share it however they want,” said Calli Schroeder, a lawyer for the privacy and data protection company VeraSafe.
The companies are required to disclose very little about their data collection. By law, companies need only describe their practices in their privacy policies, which tend to be dense legal documents that few people read and even fewer can truly understand.
EVERYTHING CAN BE HACKED
Does it really matter that your information isn’t actually anonymous? Location data companies argue that your data is safe — that it poses no real risk because it’s stored on guarded servers. This assurance has been undermined by the parade of publicly reported data breaches — to say nothing of breaches that don’t make headlines. In truth, sensitive information can be easily transferred or leaked, as evidenced by this very story.
We’re constantly shedding data, for example, by surfing the internet or making credit card purchases. But location data is different. Our precise locations are used fleetingly in the moment for a targeted ad or notification, but then repurposed indefinitely for much more profitable ends, like tying your purchases to billboard ads you drove past on the freeway. Many apps that use your location, like weather services, work perfectly well without your precise location — but collecting your location feeds a lucrative secondary business of analyzing, licensing and transferring that information to third parties.
For many Americans, the only real risk they face from having their information exposed would be embarrassment or inconvenience. But for others, like survivors of abuse, the risks could be substantial. And who can say what practices or relationships any given individual might want to keep private, to withhold from friends, family, employers or the government? We found hundreds of pings in mosques and churches, abortion clinics, queer spaces and other sensitive areas.
In one case, we observed a change in the regular movements of a Microsoft engineer. He made a visit one Tuesday afternoon to the main Seattle campus of a Microsoft competitor, Amazon. The following month, he started a new job at Amazon. It took minutes to identify him as Ben Broili, a manager now for Amazon Prime Air, a drone delivery service.
“I can’t say I’m surprised,” Mr. Broili told us in early December. “But knowing that you all can get ahold of it and comb through and place me to see where I work and live — that’s weird.” That we could so easily discern that Mr. Broili was out on a job interview raises some obvious questions, like: Could the internal location surveillance of executives and employees become standard corporate practice?
Mr. Broili wasn’t worried about apps cataloguing his every move, but he said he felt unsure about whether the tradeoff between the services offered by the apps and the sacrifice of privacy was worth it. “It’s an awful lot of data,” he said. “And I really still don’t understand how it’s being used. I’d have to see how the other companies were weaponizing or monetizing it to make that call.”
If this kind of location data makes it easy to keep tabs on employees, it makes it just as simple to stalk celebrities. Their private conduct — even in the dead of night, in residences and far from paparazzi — could come under even closer scrutiny.
Reporters hoping to evade other forms of surveillance by meeting in person with a source might want to rethink that practice. Every major newsroom covered by the data contained dozens of pings; we easily traced one Washington Post journalist through Arlington, Va.
In other cases, there were detours to hotels and late-night visits to the homes of prominent people. One person, plucked from the data in Los Angeles nearly at random, was found traveling to and from roadside motels multiple times, for visits of only a few hours each time.
While these pointillist pings don’t in themselves reveal a complete picture, a lot can be gleaned by examining the date, time and length of time at each point.
Large data companies like Foursquare — perhaps the most familiar name in the location data business — say they don’t sell detailed location data like the kind reviewed for this story but rather use it to inform analysis, such as measuring whether you entered a store after seeing an ad on your mobile phone.
Location data is also collected and shared alongside a mobile advertising ID, a supposedly anonymous identifier about 30 digits long that allows advertisers and other businesses to tie activity together across apps. The ID is also used to combine location trails with other information like your name, home address, email, phone number or even an identifier tied to your Wi-Fi network.
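The re-identification risk that the ad ID creates comes from a simple join: two datasets that are each nominally anonymous become identifying the moment they share a key. The sketch below is a hypothetical illustration of that mechanism — the IDs, names, and places are invented, and real advertising IDs are longer device identifiers.

```python
# Dataset 1: an "anonymous" location trail, keyed only by a mobile ad ID.
location_trail = {
    "ad-id-123": ["home: 123 Elm St", "office: 1 Market Sq", "clinic: 9 Oak Ave"],
}

# Dataset 2: a separately sourced audience profile tying the same ad ID
# to contact details, of the kind advertisers use for targeting.
audience_profile = {
    "ad-id-123": {"email": "jane@example.com", "zip": "22201"},
}

def reidentify(ad_id):
    """Join the two datasets on the shared ad ID, yielding a named trail."""
    profile = audience_profile.get(ad_id, {})
    return {**profile, "trail": location_trail.get(ad_id, [])}

record = reidentify("ad-id-123")
```

Neither dictionary alone names anyone; joined on the shared key, the output ties an email address and ZIP code to a trail of sensitive locations — which is why adding "only one or two other sources" is enough to crumble the protections in a location dataset.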
The data can change hands in almost real time, so fast that your location could be transferred from your smartphone to the app’s servers and exported to third parties in milliseconds. This is how, for example, you might see an ad for a new car some time after walking through a dealership.
That data can then be resold, copied, pirated and abused. There’s no way you can ever retrieve it.
Location data is about far more than consumers seeing a few more relevant ads. This information provides critical intelligence for big businesses. The Weather Channel app’s parent company, for example, analyzed users’ location data for hedge funds, according to a lawsuit filed in Los Angeles this year that was triggered by Times reporting. And Foursquare received much attention in 2016 after using its data trove to predict that after an E. coli crisis, Chipotle’s sales would drop by 30 percent in the coming months. Its same-store sales ultimately fell 29.7 percent.
Much of the concern over location data has focused on telecom giants like Verizon and AT&T, which have been selling location data to third parties for years. Last year, Motherboard, Vice’s technology website, found that once the data was sold, it was being shared to help bounty hunters find specific cellphones in real time. The resulting scandal forced the telecom giants to pledge they would stop selling location movements to data brokers.
Yet no law prohibits them from doing so.
Location data is transmitted from your phone via software development kits, or S.D.K.s as they’re known in the trade. The kits are small programs that can be used to build features within an app. They make it easy for app developers to simply include location-tracking features, a useful component of services like weather apps. Because they’re so useful and easy to use, S.D.K.s are embedded in thousands of apps. Facebook, Google and Amazon, for example, have extremely popular S.D.K.s that allow smaller apps to connect to bigger companies’ ad platforms or help provide web traffic analytics or payment infrastructure.
But they could also sit on an app and collect location data while providing no real service back to the app. Location companies may pay the apps to be included — collecting valuable data that can be monetized.
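The pattern described above — a useful feature in front, a data side channel behind it — can be made concrete with a hypothetical sketch. The class, method names, and coordinates below are invented for illustration; this is not the code of any real SDK.

```python
class AnalyticsSDK:
    """A toy third-party SDK: one visible feature, one invisible side channel."""

    def __init__(self):
        self.collector = []  # stands in for the vendor's remote servers

    def on_location(self, lat, lon):
        # Side channel: the raw fix is retained by the vendor...
        self.collector.append((lat, lon))
        # ...while the app only sees the legitimate "service" it embedded the SDK for.
        return f"Weather near ({lat:.2f}, {lon:.2f}): sunny"

sdk = AnalyticsSDK()
message = sdk.on_location(38.8977, -77.0365)  # the app just wanted a forecast
```

From the app developer's perspective, only the returned string matters; the vendor, meanwhile, has quietly accumulated a precise fix it can monetize.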
“If you have an S.D.K. that’s frequently collecting location data, it is more than likely being resold across the industry,” said Nick Hall, chief executive of the data marketplace company VenPath.
THE ‘HOLY GRAIL’ FOR MARKETERS
If this information is so sensitive, why is it collected in the first place?
For brands, following someone’s precise movements is key to understanding the “customer journey” — every step of the process from seeing an ad to buying a product. It’s the Holy Grail of advertising, one marketer said, the complete picture that connects all of our interests and online activity with our real-world actions.
Once they have the complete customer journey, companies know a lot about what we want, what we buy and what made us buy it. Other groups have begun to find ways to use it too. Political campaigns could analyze the interests and demographics of rally attendees and use that information to shape their messages to try to manipulate particular groups. Governments around the world could have a new tool to identify protesters.
Pointillist location data also has some clear benefits to society. Researchers can use the raw data to provide key insights for transportation studies and government planners. The City Council of Portland, Ore., unanimously approved a deal to study traffic and transit by monitoring millions of cellphones. Unicef announced a plan to use aggregated mobile location data to study epidemics, natural disasters and demographics.
For individual consumers, the value of constant tracking is less tangible. And the lack of transparency from the advertising and tech industries raises still more concerns.
Does a coupon app need to sell second-by-second location data to other companies to be profitable? Does that really justify allowing companies to track millions and potentially expose our private lives?
Data companies say users consent to tracking when they agree to share their location. But those consent screens rarely make clear how the data is being packaged and sold. If companies were clearer about what they were doing with the data, would anyone agree to share it?
What about data collected years ago, before hacks and leaks made privacy a forefront issue? Should it still be used, or should it be deleted for good?
If it’s possible that data stored securely today can easily be hacked, leaked or stolen, is this kind of data worth that risk?
Is all of this surveillance and risk worth it merely so that we can be served slightly more relevant ads? Or so that hedge fund managers can get richer?
The companies profiting from our every move can’t be expected to voluntarily limit their practices. Congress has to step in to protect Americans’ needs as consumers and rights as citizens.
Until then, one thing is certain: We are living in the world’s most advanced surveillance system. This system wasn’t created deliberately. It was built through the interplay of technological advance and the profit motive. It was built to make money. The greatest trick technology companies ever played was persuading society to surveil itself.
Stuart A. Thompson (stuart.thompson@nytimes.com) is a writer and editor in the Opinion section. Charlie Warzel (charlie.warzel@nytimes.com) is a writer at large for Opinion.
Lora Kelley, Ben Smithgall, Vanessa Swales and Susan Beachy contributed research. Alex Kingsbury contributed reporting. Graphics by Stuart A. Thompson. Additional production by Jessia Ma and Gus Wezerek. Note: Visualizations have been adjusted to protect device owners.
Opening satellite imagery: Microsoft (New York Stock Exchange); Imagery (Pentagon, Los Angeles); Google and DigitalGlobe (White House); Microsoft and DigitalGlobe (Washington, D.C.); Imagery and Maxar Technologies (Mar-a-Lago).
Like other media companies, The Times collects data on its visitors when they read stories like this one. For more detail please see our privacy policy and our publisher’s description of The Times’s practices and continued steps to increase transparency and protections.
GDPR and Recruitment
Frequently Asked Questions
The General Data Protection Regulation (GDPR) will greatly impact the way companies recruit globally. We have answered some of the most urgent questions recruiters have about what this means for them and what they need to do now to prepare.
GDPR | Basic Information
Does the GDPR also govern the personal data of Non-EU citizens living in the EU?
Yes, the regulation applies to the processing of personal data of data subjects who are physically in the European Union.
Does the looming Brexit have any immediate effect on how companies in the UK must or need not be GDPR-compliant?
It is true that once Brexit is final, GDPR will not have any immediate authority in the UK. However, the Information Commissioner’s Office (ICO), the British data protection authority, is working on legislation referencing the GDPR, making it very likely that companies within the UK will still be under this legislation or a very similar one. Furthermore, you will almost certainly be receiving applications from EU citizens, making it critical for your business to be GDPR-compliant.
Is the GDPR valid for all data obtained and processed after May 25, 2018, or does it also impact the data I already own, i. e. in my existing talent pools?
The GDPR applies to all personal data you process from May 25, 2018 onwards, including data you collected earlier, such as existing talent pools. You therefore need a lawful basis, typically consent, for the candidate data you already hold.
Consent
Can a candidate give consent by including a note in their CV or application letter, stating that they agree to have their data stored and processed?
The issue here is that the candidate does not know how you will process their data, i. e. they don’t know where you will store the data, with whom you will share it, or who will get access to it. Therefore, such a statement is only valid if the candidate knows about your data processing. Example: If a candidate includes a signed note with the URL of your privacy policy and explicitly states when he/she saw it and consented to it, this will be considered legitimate consent.
Where the processing is based on consent, you shall be «able to demonstrate that the data subject has consented to processing of his or her personal data». So a written declaration is highly recommended.
Do I need to make explicit while obtaining consent how I will process the candidate data?
Yes, you do need to be clear and transparent.
How do I obtain consent when I am not using an ATS?
This is a challenge, as you need to prove that you obtained the consent, so written documentation signed by the candidate will be required.
While obtaining consent from a candidate, do I have to communicate with him or her in a specified language, i. e. their national language?
There is no provision in the GDPR obliging you to localise your privacy policy. However, the candidate needs to understand exactly what you mean. While English is widely understood, it can be beneficial to translate your privacy policy into local languages to make sure that the consent you obtain is valid. It is also advisable to use simple language that is easy to understand for candidates who aren’t lawyers.
How do you obtain consent from candidates who hand you their CVs or apply directly at a job fair?
You will need to find a process to document their consent. For example, make sure that you have a standard form signed by each of them, keep this form in your files, and delete the data once the candidate requests deletion. Using a technology solution like the SmartRecruiters Field Recruiting App will support your efforts to be compliant.
How specific do I need to be when stating the purpose of obtaining and processing my candidate data within my privacy policy?
The candidate needs to have accurate information. You need to state for which purpose you will use the data, who will get access to the information (listing internal and third parties), explain the rights of the candidates, who they can contact if they have complaints, etc.
How do I comply with GDPR rules for employee referrals, seeing as referrals rarely give their consent before being approached?
If you approach an individual, you should make sure that you have a legitimate interest (i. e. a job offer). If so, you must obtain written consent from the candidate and allow the candidate to approve your privacy policy. Once consent is obtained, you can process further.
Consent | Application
If a candidate applies via an ATS does this constitute consent?
Depending on your ATS, it might. If your ATS is set up in a specific way that helps it obtain and store consent, applying to a job will constitute consent. We advise that you refer to your ATS provider to make sure that they obtain consent in a GDPR-compliant way.
If a candidate responds to a sent message does this constitute consent?
No, it doesn’t. In order to be able to store and process candidate data you need to obtain explicit consent from these candidates, meaning you will have to enter them into a process to provably obtain consent for further action.
If giving consent to data processing is a necessary condition for being allowed to apply to my job, does this constitute discrimination?
Under the GDPR, consent is the first step to processing the personal data of the candidate. Therefore, if you need to process the candidate’s data, you need to get his or her consent.
If an applicant sends an email or a letter containing their application, does this imply consent to store and process their data?
No, it doesn’t. In order to be able to store and process candidate data you received via email or letter you need to obtain consent from these candidates, meaning you will have to enter them into a process to provably obtain consent for further action.
Am I still allowed to accept applications via letter or email?
Yes, you are. However, in order to be able to store and process this data you need to obtain consent from these candidates, meaning you will have to enter them into a process to provably obtain consent for further action.
How do I obtain consent from candidates who apply through an advertisement on a job board?
As the data controller (future employer), you are responsible for obtaining consent. You will have to enter them into a process to provably obtain consent for further action. However, the job boards should be GDPR compliant as well. We urge you to review the terms of use and data privacy of the job boards.
How do I obtain consent from candidates who apply through my own careers page?
The consent of the candidates should be obtained. For example, you can add a checkbox confirming that the candidate has read the privacy policy. By checking this box, the candidate gives his/her consent. The candidate should not be able to go further in the process without giving consent.
Consent | Active Sourcing
Will active sourcing stay possible under the GDPR?
Yes, it will, but there are a few conditions to look out for. As a lawful basis for approaching a candidate, you can claim that you have a so-called «legitimate interest» in growing your business by approaching a talent for a role, and that the candidate has a «legitimate interest» in being approached by you. Nevertheless, immediately after initiating contact you have to ask that candidate for their consent to you obtaining and processing their personal data.
In order to track passive candidates, am I allowed to store candidate data in my ATS before I get their consent?
Strictly speaking, no. However, to be pragmatic, you can claim «legitimate» interest when approaching them and immediately ask their consent for further data processing.
How do I ask passive candidates, for example on LinkedIn, for consent?
As a lawful basis for approaching a candidate, you can claim that you have a so-called «legitimate interest» in growing your business by approaching a talent for a role, and that the candidate has a «legitimate interest» in being approached by you. Nevertheless, immediately after initiating contact you have to ask that candidate for their consent to you obtaining and processing their personal data.
Do I need to ask candidates for consent who have set their public profiles to display that they are actively looking for job opportunities?
Yes. A public profile signalling openness to offers supports your legitimate interest in approaching the candidate, but you still need to obtain their consent before storing and processing their personal data.
Is it permissible to approach candidates whose profiles you found using a search engine?
Yes, if it is a public profile with a business background, making it permissible to assume «legitimate interest» when contacting a potential candidate.
If I contact a candidate about a job opportunity, does this opportunity need to be publicly advertised before I approach them?
No, not necessarily. The key thing is that you have a legitimate interest to contact them, i. e. you have a real job opportunity.
If a candidate accepts my request to connect on a business network, making their contact information visible, am I allowed to contact them?
In line with the GDPR’s principles, you have the right to contact them if you have a legitimate interest, i. e. a job opportunity. Furthermore, you shall get their consent and inform them of how you will process their personal data.
Is it still permissible to use sourcing tools that reveal candidates’ personal email addresses or phone numbers?
So far, it is still permissible. However, you need to have a legitimate interest to contact them, i. e. a job opportunity and you shall get their consent and inform them of how you will process the data. We encourage you to check the terms of use of such tools.
Is it permissible to store data that is publicly available, i. e. on a company’s home page?
Strictly speaking, you still need a lawful basis. In practice, you can claim a legitimate interest (i. e. a job opportunity) for storing it while you approach the individual, and you should then obtain their consent for further processing.
Will I still be able to export candidate profiles from LinkedIn into my ATS?
In line with the GDPR’s principles, you will need to check whether you have the right to export such profiles (please check the terms of use that you have with such providers). Furthermore, you need to have a legitimate interest to export such data (i. e. a job opportunity) and you need to get the individual’s consent.
Is it permissible to store data of actively sourced candidates in an Excel sheet?
Yes, provided that you have legitimate interest for each sourced candidate (i. e. a job opportunity) and you make sure that you have documented the consent for each of them.
Candidate Rights
Right to Access Data
How can I make sure that candidates can access their data?
There are two ways to allow candidates access to their own data:
1) By appointing a designated contact for any candidate requests and sharing their contact information. Candidate requests to access, amend or erase their data need to be heeded within a narrow time frame and compliance must be documented.
2) By employing an ATS or CRM that will allow candidates to log onto their profiles and make any necessary adjustments by themselves. This option has the added bonus of making it easy to retain and log any occurring changes.
Right to Be Forgotten (Erase)
If a candidate states that they are not interested in a job opportunity, am I still able to keep their name in my database?
Yes, if the candidate gives you the authorisation to keep the name in your database. You shall inform the candidate what you will do with the data after rejection.
If I approach a candidate who I actively sourced, but they do not want their data stored and are not interested in the role, how do I then ensure that I or my colleagues do not contact them again?
You shall make sure that this information is communicated across your organisation, and your process shall guarantee that the candidate’s request is honoured. Strictly speaking, you would have to talk personally to every employee to make sure the data is deleted. As this is quite hard to achieve, it is advisable to ask for consent to keep the contact information in order to document the opt-out.
Requests from Candidates
When appointing a contact for candidate requests regarding their data, what contact data needs to be shared exactly?
You shall at least share a direct email address and a postal address.
Data Processing
Is it permissible to store candidate data on personal laptops, for example by hiring managers?
You need to make sure that the candidate knows about such storage.
Is it permissible to share candidate data with my colleagues who will take part in job interviews?
You need to clearly state within your privacy policy with whom you will share the candidates’ personal data. If you have specified within your privacy policy that you will share this data with colleagues who are direct participants in the hiring process, such as the hiring manager, a future superior or a future colleague, this is permissible. You do not have to personally name these colleagues.
Do candidates need to be made aware of the fact that their data has been shared, for example with the hiring manager?
If you have specified within your privacy policy that you will share their data with employees who are directly involved in the hiring process, you do not have to make the candidate aware of every person you share their data with. However, if you want to share their data with an external vendor who is not named within your privacy policy, for example to run an assessment test, you need to obtain consent.
How long am I allowed to store candidate data?
Interpreting the GDPR in a very strict sense, you are only allowed to keep candidate data for as long as it serves the purpose you named when obtaining it. Once that purpose disappears, you are obliged to erase the data. However, it is up to you how you phrase that purpose. For example, stating that you will keep the candidate data «as long as a candidate is interested in positions within your organisation» gives you some leeway on how long you will be able to keep the data. In this case, you have to be able to prove that this candidate is, in fact, interested in staying within your talent pool.
Am I allowed to ask candidates to renew their consent to retain their data?
Yes, you are. If you have consent to store and process your candidates’ data and they have not explicitly banned you from contacting them, you may approach them to renew their consent to your data processing activities.
Is there a maximum limit for how long I am allowed to store candidate data?
No, the GDPR does not set a fixed maximum retention period. You may keep the data for as long as the purpose you stated when obtaining it still applies, and you must erase it once that purpose disappears.
Third party vendors
When you receive applicant data from recruitment agencies, is there a need for a data processing agreement between the recruitment agency and your own company?
Yes, there is a need for a so-called Data Processing Agreement (DPA).
Is there an official GDPR seal of quality for compliant vendors?
As of yet, there is no seal for GDPR compliance. The regulation does include the possibility of official certification, issued either by the national data protection authority or by an accredited private certification body. No such seal has been accredited yet, as the accreditation criteria are still being specified.
Who is responsible for GDPR-compliance when sourcing candidates via job boards?
The data controller (future employer) is responsible. However, the job boards need to be GDPR compliant as well. We urge you to review the terms of use and data privacy of the job boards.
Who is responsible for GDPR-compliance when sourcing candidates via CV databases?
Generally speaking, if the database is hosting candidate profiles, it is their responsibility, as they are the data controller, to make sure that they are GDPR-compliant and have obtained the necessary consent to share the candidate profiles with you. However, as you will become the data controller once the candidate profiles are duplicated within your systems, it is certainly advisable to check back with your vendors on their efforts to become compliant.
Documentation
How do you prove which version of the Data Privacy Policy the candidate accepted?
Our solution allows you to add the date of the version that the candidate will accept. The candidate therefore accepts the version in force on the day of his or her consent. As far as SmartRecruiters is concerned, we archive our privacy statements («Privacy Policy»), so it is possible to find the version accepted by the candidate. In addition, if the client has added a link to their own privacy policy, it is the client’s responsibility to keep a copy of these declarations in order to keep track of them.
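The record-keeping described above can be sketched minimally: store, alongside each consent, the exact policy version shown to the candidate and a timestamp, so an audit can later prove what was accepted and when. This is a hedged illustration, not the SmartRecruiters implementation; all names and versions are invented.

```python
from datetime import datetime, timezone

consent_log = []  # stands in for a durable, append-only store

def record_consent(candidate_id, policy_version):
    """Append an auditable consent entry for a candidate."""
    consent_log.append({
        "candidate": candidate_id,
        "policy_version": policy_version,  # exact version shown to the candidate
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def consented_version(candidate_id):
    """Return the policy version the candidate last accepted, or None."""
    entries = [e for e in consent_log if e["candidate"] == candidate_id]
    return entries[-1]["policy_version"] if entries else None

record_consent("cand-42", "2018-05-25")
```

Pairing each consent with a version identifier only helps if the corresponding policy texts are archived, which is why the answer above stresses keeping copies of every declaration.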
Who controls whether or not candidate data is truly deleted from our systems?
In the case of an audit, you need to be able to prove that you have complied with your candidates’ requests to delete their data. Our suggestion is to appoint a Data Protection Officer (DPO) within your company, who would be tasked with running internal audits and ensuring GDPR compliance.
Is your recruiting data GDPR compliant?
Download our SmartPaper for an in-depth overview of the GDPR and its potential impact on your recruiting data.
Prepare Data for Exploration Coursera Quiz Answers
All Weeks Prepare Data for Exploration Coursera Quiz Answers
This is the third course in the Google Data Analytics Certificate. These courses will equip you with the skills needed to apply to introductory-level data analyst jobs. As you continue to build on your understanding of the topics from the first two courses, you’ll also be introduced to new topics that will help you gain practical data analytics skills.
You’ll learn how to use tools like spreadsheets and SQL to extract and make use of the right data for your objectives and how to organize and protect your data. Current Google data analysts will continue to instruct and provide you with hands-on ways to accomplish common data analyst tasks with the best tools and resources.
Prepare Data for Exploration Week 01 Quiz Answers
L2 Differentiate between data structures:
Practice Quiz-2 Answers
L3 Generating data:
Practice Quiz-3 Answers
L4 Explore data types, fields, and values:
Q3. Fill in the blank: Internet search engines are an everyday example of how Boolean operators are used. The Boolean operator _____ expands the number of results when used in a keyword search.
Prepare Data for Exploration Weekly challenge 1 Answers
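The weekly challenge above touches on Boolean operators in keyword searches. A minimal Python sketch (the pages and search terms below are invented, not from the course) of why OR expands results while AND narrows them:

```python
# Minimal sketch: the Boolean operator OR expands a keyword search by
# matching ANY term, while AND narrows it to pages matching ALL terms.
pages = [
    "affordable laptop deals",
    "laptop repair guide",
    "tablet buying guide",
]

def search(terms, op):
    """Return the pages that match all terms (AND) or any term (OR)."""
    if op == "AND":
        return [p for p in pages if all(t in p for t in terms)]
    return [p for p in pages if any(t in p for t in terms)]

print(search(["laptop", "guide"], "AND"))  # ['laptop repair guide']
print(search(["laptop", "guide"], "OR"))   # all three pages match
```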
Prepare Data for Exploration Week 02 Quiz Answers
Practice Quiz-1 Answers
L2 Unbiased and objective data:
Practice Quiz-2 Answers
L3 Explore data credibility:
Q4. A data analyst is analyzing sales data for the newest version of a product. They use third-party data about an older version of the product. For what reasons is this inappropriate for their analysis? Select all that apply.
Practice Quiz-3 Answers
L4 Understand data ethics and privacy:
Practice Quiz-4 Answers
L5 Explaining open data:
Prepare Data for Exploration Weekly challenge 2 Answers
Q3. Which of the following are qualities of unreliable data? Select all that apply.
Prepare Data for Exploration Week 03 Quiz Answers
Practice Quiz-1 Answers
Accessing different data sources:
Practice Quiz-2 Answers
L2 Working with databases:
Q3. What is the difference between a primary key and a foreign key?
Practice Quiz-3 Answers
L3 Managing data with metadata:
Q3. A large metropolitan high school gives each of its students an ID number to differentiate them in its database. What kind of metadata are the ID numbers?
Practice Quiz-4 Answers
L5 Sorting and filtering:
Q3. A data analyst is reviewing a national database of real estate sales. They are only interested in sales of condominiums. How can the analyst narrow their scope?
Practice Quiz-5 Answers
L6 Working with large datasets in SQL:
Prepare Data for Exploration Weekly challenge 3 Answers
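Two of the Week 3 questions (the difference between primary and foreign keys, and narrowing a real-estate database to condominium sales) can be illustrated with a few lines of SQL. A minimal sqlite3 sketch, with invented table and column names:

```python
# Illustrative sqlite3 sketch (table and column names are invented):
# a primary key uniquely identifies each row in its own table, while a
# foreign key references a primary key in another table. The final query
# narrows the analyst's scope with a WHERE filter, as in the condo question.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""CREATE TABLE property_type (
                    type_id INTEGER PRIMARY KEY,   -- primary key
                    name    TEXT NOT NULL)""")
conn.execute("""CREATE TABLE sale (
                    sale_id INTEGER PRIMARY KEY,
                    price   REAL,
                    -- foreign key pointing at property_type:
                    type_id INTEGER REFERENCES property_type(type_id))""")
conn.execute("INSERT INTO property_type VALUES (1, 'condominium'), (2, 'house')")
conn.executemany("INSERT INTO sale VALUES (?, ?, ?)",
                 [(1, 250000, 1), (2, 410000, 2), (3, 199000, 1)])
rows = conn.execute("""SELECT s.sale_id, s.price
                       FROM sale s JOIN property_type t USING (type_id)
                       WHERE t.name = 'condominium'""").fetchall()
print(rows)  # only the two condominium sales remain
```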
Prepare Data for Exploration Week 04 Quiz Answers
Practice Quiz-1 Answers
L2 Effectively organize data:
Practice Quiz-2 Answers
L3 Securing data:
Prepare Data for Exploration Weekly challenge 4 Answers
Prepare Data for Exploration Week 05 Quiz Answers
Practice Quiz-1 Answers
Course challenge:
Scenario 1, questions 1-5
Q1. You’ve been working at a data analytics consulting company for the past six months. Your team helps restaurants use their data to better understand customer preferences and identify opportunities to become more profitable.
To do this, your team analyzes customer feedback to improve restaurant performance. You use data to help restaurants make better staffing decisions and drive customer loyalty. Your analysis can even track the number of times a customer requests a new dish or ingredient in order to revise restaurant menus.
Currently, you’re working with a vegetarian sandwich restaurant called Garden. The owner wants to make food deliveries more efficient and profitable. To accomplish this goal, your team will use delivery data to better understand when orders leave Garden, when they get to the customer, and overall customer satisfaction with the orders.
Before project kickoff, you attend a discovery session with the vice president of customer experience at Garden. He shares information to help your team better understand the business and project objectives. As a follow-up, he sends you an email with datasets.
Click below to read the email: C3 Scenario 1_Client Email.pdf
And click below to access the datasets:
Course 3 Final Challenge Data Sets – Customer survey data (1).csv
Course 3 Final Challenge Data Sets – Delivery times_distance (1).csv
Q2. Next, you review the customer satisfaction survey data:
CustomerSurveyData – Customer survey data.csv
Q3. Now, you review the data on delivery times and the distance of customers from the restaurant:
DeliveryTimes_DistanceData – Delivery times_distance.csv
Q5. Now that you’re familiar with the data, you want to build trust with the team at Garden.
Scenario 2, questions 6-10
Q6. You’ve completed this program and are interviewing for a junior data scientist position at a company called Sewati Financial Services.
Click below to review the job description:
So far, you’ve successfully completed the first interview with a recruiter. They arrange your second interview with the team at Sewati Financial Services.
Click below to read the email from the human resources director:
Course 3 Scenario 2_Second Interview Email.pdf
You arrive 15 minutes early for your interview. Soon, you are escorted into a conference room, where you meet Kai Harvey, the senior manager of strategy. After welcoming you, he begins the behavioral interview.
Q6. Consider and respond to the following question. Select all that apply.
Q7. Consider and respond to the following question. Select all that apply.
Q8. Consider and respond to the following question. Select all that apply.
Our analysts often work with the same spreadsheet, but for different purposes. How would you use filtering to help in this situation?
All Course Quiz Answers of Google Data Analytics Professional Certificate
Prepare Data for Exploration Coursera Course Review:
In our experience, we suggest you enroll in the Prepare Data for Exploration Coursera course and gain new skills from professionals, completely free; we assure you it will be worth it.
The Prepare Data for Exploration course is available on Coursera for free. If you are stuck anywhere on a quiz or graded assessment, just visit Networking Funda to get the Prepare Data for Exploration Coursera Quiz Answers.
Conclusion:
I hope these Prepare Data for Exploration Coursera Quiz Answers help you learn something new from this course. If they did, don’t forget to bookmark our site for more Coursera Quiz Answers.
This course is intended for audiences of all experience levels who are interested in learning about Data Analytics in a business context; there are no prerequisite courses.

Excel Fundamentals for Data Analysis Coursera Quiz Answers
Get Excel Fundamentals for Data Analysis Coursera Quiz Answers
As data becomes the modern currency, the ability to analyse it quickly and accurately has become of paramount importance. Excel, with its extraordinarily broad range of features and capabilities, is one of the most widely used programs for doing this. In the first course of our Excel Skills for Data Analysis and Visualization Specialization, you will learn the fundamentals of Excel for data analysis.
When you have completed the course, you will be able to use a range of Excel tools and functions to clean and prepare data for analysis; automate data analysis with the help of Named Ranges and Tables; and use logical and lookup functions to transform, link and categorise data.
Week 1: Excel Fundamentals for Data Analysis
Quiz 1: Excel functions for combining text values
Q1. The worksheet contains the following data:
Q2. The worksheet contains the following data:
Q3. The worksheet contains the following data:
Q4. The worksheet contains the following data:
Zara wants to join the value in B2 and the value in C2 separated by a space to get HSBC Shanghai. She enters the following formula:
Quiz 2: Functions that split text data
Q1. The worksheet contains the following data:
Q2. The worksheet contains the following data:
Q3. Cell B2 holds the data: BUSA3015
Q4. The workbook contains the following data:
Q5. The workbook contains the following data:
Quiz 3: Combining text functions
Q1. The worksheet contains the following data:
Q2. The worksheet contains the following data:
Q3. The worksheet contains the following data:
Q4. The worksheet contains the following data:
Q5. The worksheet contains the following data:
Quiz 4: Cleaning data and changing case
Q1. The worksheet contains the following data:
What will the formula: =TRIM(B2) return?
Q2. The worksheet contains the following data:
Q3. What does the CLEAN function do?
Q4. The worksheet contains the following data:
Q5. The worksheet contains the following data:
What will the formula: =CONCATENATE(PROPER(B2),” “,B3) return?
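The text functions in this quiz behave in ways worth spelling out; for instance, Excel's TRIM collapses repeated internal spaces as well as stripping the ends. A rough Python sketch of the behavior, with invented cell values:

```python
# Rough Python analogues of the Excel text functions in this quiz
# (sketch only; the cell values below are invented, not from the course).
def excel_trim(s):
    # Excel's TRIM strips leading/trailing spaces AND collapses repeated
    # internal spaces to one, unlike Python's plain str.strip().
    return " ".join(s.split())

def excel_proper(s):
    # Excel's PROPER capitalises the first letter of each word.
    return s.title()

b2, b3 = "  mary  jones ", "SMITH"
print(excel_trim(b2))                           # 'mary jones'
# Like =CONCATENATE(PROPER(TRIM(B2)), " ", B3):
print(excel_proper(excel_trim(b2)) + " " + b3)  # 'Mary Jones SMITH'
```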
Quiz 5: Removing and replacing text characters
Quiz 6: Cleaning and manipulating text: Test your skills
Q1. To do this assessment you should download this Excel workbook, follow the instructions, and answer the questions.
C1W1 Assessment
XLSX File
Download file
In column D we need a formula to generate the full name.
Which of the following formulas would return the result Tina DE SIATO in cell D23?
Q3. If we wanted to change the formula in D4 to return Stevie BACATA, which of the following formulas would achieve this?
Q4. If we entered the calculation =PROPER(CONCAT(C4,” “,UPPER(B4))) in D4, what result would it return?
Q5. Fill in the Email column. The format for the email addresses is First Name, a “.”, Last Name, followed by @pushpin.com. It should all be in lower case. For example, Stevie Bacata’s email address is [email protected]
Which of these formulas would create the email addresses correctly? Multiple options may be correct.
Q7. Which formula could we use to return the extension number (the last four digits in column K) for each staff member?
Q8. We would like just the first letter of North or West to indicate the wing in column N. Which formula will give the desired result?
Q9. Since the wings are West and North, which have different numbers of characters, we cannot hard code the number 4. We will need to use a formula that changes with the wing.
What would be the result of the formula: =FIND(” “,K23)?
Q10. We would like to return the full word of the wing. What formula will give the desired result?
Q11. There are a few names that need to be cleaned up. Bob Decker (row 22) has an unusual character at the end of his name. What is the code for this character?
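The string manipulations in this assessment (building lower-case email addresses, and using FIND to split a variable-length wing name from an extension) can be mirrored in Python. A sketch with invented names and cell contents; note that Excel's FIND is 1-based while Python's str.find is 0-based:

```python
# Python analogues of the assessment's text steps (the names and cell
# contents below are invented; the workbook itself is not reproduced here).
first, last = "Stevie", "Bacata"

# Email: first name, '.', last name, '@pushpin.com', all in lower case.
email = f"{first}.{last}@pushpin.com".lower()
print(email)  # stevie.bacata@pushpin.com

k23 = "North 1234"                 # e.g. a wing plus a 4-digit extension
space_excel = k23.find(" ") + 1    # Excel's FIND is 1-based; str.find is 0-based
wing = k23[:k23.find(" ")]         # like =LEFT(K23, FIND(" ",K23)-1)
ext = k23[-4:]                     # like =RIGHT(K23, 4)
print(space_excel, wing, ext)      # 6 North 1234
```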
Week 2: Excel Fundamentals for Data Analysis
Quiz 1: Converting Data with VALUE and TEXT
Q1. The worksheet contains the following data:
Q2. The worksheet contains the following data:
Q3. The worksheet contains the following data:
Q4. The worksheet contains the following data:
Q5. The worksheet contains the following data:
What will be the result of the following formula:
Quiz 2: Understanding dates and basic date functions
Q3. Assume that today is the 21st of January, 2020 and we are using the day/month/year format for dates. What will be the result of: =TODAY()-7
Q5. The worksheet contains the following data:
Q6. Cell N3 currently holds the value: 1/07/2019 (see below), the date that a payment was due (1st of July, 2019).
Quiz 3: Generating Valid Dates using the DATE function
Q1. The worksheet contains the following data:
Q2. The worksheet contains the following data:
Q3. The worksheet contains the following data:
Q4. The worksheet contains the following data:
Q5. The worksheet contains the following data:
Quiz 3: Calculations with Dates and using DAYS, NETWORKDAYS and WORKDAY
Q3. An invoice is due 10 working days after the invoice date. Which of the following would be an appropriate formula, to get the due date, if the invoice date is 3/01/2020 and this value is held in cell D2, and given that the country has a 3-day weekend of Saturday, Sunday, and Monday?
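Excel would typically handle a custom weekend with WORKDAY.INTL, whose weekend-string argument runs Monday through Sunday (so "1000011" marks Monday, Saturday, and Sunday as non-working). The underlying date arithmetic can be sketched in Python:

```python
# Sketch of WORKDAY-style arithmetic with a custom 3-day weekend
# (Saturday, Sunday, Monday), mirroring Excel's WORKDAY.INTL logic.
from datetime import date, timedelta

WEEKEND = {5, 6, 0}  # Python weekday(): Mon=0 ... Sat=5, Sun=6

def workday(start, n):
    """Return the date n working days after start, skipping the weekend."""
    d, remaining = start, n
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() not in WEEKEND:
            remaining -= 1
    return d

# Invoice dated 3/01/2020 (a Friday), due 10 working days later:
print(workday(date(2020, 1, 3), 10))  # 2020-01-22
```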
Quiz 4: More sophisticated date calculations with EOMONTH and EDATE
Q1. Cell F3 holds the value: 3/04/2020, i.e., the 3rd of April, 2020.
Q2. Cell F3 holds the value: 3/04/2020, i.e., the 3rd of April, 2020.
Q3. Cell F3 holds the value: 3/04/2020, i.e., the 3rd of April, 2020.
Q4. Cell F3 holds the value: 3/04/2020, i.e., the 3rd of April, 2020.
Q5. Cell F3 holds the value: 3/04/2020, i.e., the 3rd of April, 2020.
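Python's standard library has no direct month-shifting function, so a sketch of EOMONTH and EDATE needs a small helper that clamps the day to the target month's length, just as Excel does:

```python
# Python analogues of Excel's EOMONTH and EDATE (sketch only; the F3
# value is taken from the quiz, the rest is illustrative).
import calendar
from datetime import date

def edate(d, months):
    """Shift d by a number of months, clamping the day like Excel's EDATE."""
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def eomonth(d, months):
    """Last day of the month `months` months from d, like Excel's EOMONTH."""
    shifted = edate(d, months)
    return shifted.replace(day=calendar.monthrange(shifted.year, shifted.month)[1])

f3 = date(2020, 4, 3)
print(edate(f3, 1))    # 2020-05-03
print(eomonth(f3, 0))  # 2020-04-30
```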
Quiz 5: Working with numbers and dates: Test your skills
Q1. To do this assessment you should download this Excel workbook, follow the instructions, and answer the questions.
C1W2 Assessment
XLSX File
Download file
Populate Column B, Short ID, which is to contain the 4 digits of the Emp ID using the VALUE function. What is needed to replace XXXX in the formula for cell B7: =VALUE(RIGHT(XXXX))?
Q2. What is the formula in cell B27?
Q3. Input the following formula: =TEXT(G7,”DD/MMMM/YY”), into cell H7. What is the result?
Q4. Insert today’s date in cell F1, what is the formula?
Q5. Using the TODAY function, what is the most efficient formula to type in cell I1?
Q6. Using the NOW function, what is the most efficient formula to display the date and time in cell C1?
Q7. Using the DAY, MONTH, or YEAR functions, what is the most efficient formula that is required in cell F2?
Q8. Using the DAY, MONTH, or YEAR functions, what is the most efficient formula that is required in cell F3?
Q9. Using the DAY, MONTH, or YEAR functions, what is the most efficient formula that is required in cell F4?
Q10. Populate cell I4 using the DATE function. What is to replace XXXX in the formula: =DATE(XXXX,F3,F2)?
Q11. Populate cell I7 using the DAYS function. What is to replace XXXXX in the formula: =DAYS(XXXXX)?
Q12. Using the NETWORKDAYS function, populate cell J7. What is the most efficient formula that is required?
Q14. Populate cell I2 using the EOMONTH function. What is the formula that is needed to do this most efficiently?
Q15. Populate cell I3 using the EDATE function. What is the formula that is needed to do this most efficiently?
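The remaining conversions in this assessment map onto Python's datetime module; a quick sketch with invented cell values (Excel's format code DD/MMMM/YY corresponds roughly to strftime's %d/%B/%y):

```python
# Quick Python parallels for the Week 2 conversions (cell values invented).
from datetime import date

g7 = date(2019, 7, 1)
print(g7.strftime("%d/%B/%y"))   # '01/July/19', like =TEXT(G7,"DD/MMMM/YY")
print(int("4321"))               # 4321, like =VALUE on a digits-only text ID
days = (date(2020, 1, 21) - date(2020, 1, 3)).days
print(days)                      # 18, like Excel's DAYS(end_date, start_date)
```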
Week 3: Excel Fundamentals for Data Analysis
Quiz 1: Cell Referencing and Naming
Quiz 2: Defined Names and Create from Selection
Q3. The worksheet contains the following data:
Quiz 3: Managing Names
Quiz 4: Calculations with Named Ranges
Quiz 5: Automating Data Validation with Named Ranges
Quiz 6: Defined Names for working more effectively with data: Test your skills
Q1. To do this assessment you should download this Excel workbook, follow the instructions, and answer the questions.
C1W3 Assessment
XLSX File
Download file
Have a look at the Travel expense calculator worksheet. Note there are quite a few errors. Start by addressing the problem of the missing exchange rates by naming the ranges. Go to the Currency Rates worksheet and use Create from Selection to name all the rates using the labels in column A.
What value is now showing for Total Other Expenses in K6?
Q2. While the calculation of Other Expenses is looking better, it is still not correct. Open the Name Manager.
Have a look at the named ranges for Ex_Rate and Other; they only go to row 14, which explains the incorrect calculation. Edit Ex_Rate to go from L11:L21 and change Other to go from J11:J21. Click OK and close the Name Manager.
Other Expenses has been corrected. What is the total for Other as shown in K6?
Q3. Let’s fix Total Transportation Costs next. Open the Name Manager; there is a named range called Travel_Costs, but this is the wrong name. Change it to TravelCosts and click OK.
What is the value for Transport as shown in K3 (one or two decimal places only)?
Q4. Next, Accommodation Costs, use any method you think suitable to give the name Accommodation_Costs to range F11:F21.
What is the total for Accommodation as shown in K4 (one or two decimal places only)?
Q5. And now to fix meals, let’s be efficient and use Create from Selection to name all three ranges simultaneously. Select G10:I21 and click Create from Selection.
What is the total cost of Meals as shown in K5 (one or two decimal places only)?
What is the summary value for London in the local currency? (no commas)
Q7. Stay on the Summary By Region worksheet. Enter a formula in C5 to add up the total amount spent in Paris (use the named range you just created). Then do the same in C6 for Jakarta.
Using the named range and the SUM function, what are the formulas to use here?
Note: you should have one answer for Paris and one answer for Jakarta. Just type the answer for one of these as your answer to this question.
Q8. Stay on the Summary By Region worksheet. In D5 create a calculation to convert Euros to Dollars by multiplying the Euros spent (C5) by the exchange rate for Euro (which uses the named range EUR). Perform a similar calculation to convert the Indonesian Rupiah to dollars (using the correct named range).
What is the formula in D6?
Q9. Click in D7 (still in Summary By Region), and use Autosum to get the total spent in USD.
What is this value? (no commas, no characters, no dollar signs)
Q10. Click in B9 (still in Summary By Region), and use the Paste Names tool to Paste all the named ranges into your workbook.
What are the contents of cell C28? (Cut and paste your answer here to avoid errors.)
Q11. Look at the Travel expense calculator worksheet. What is the value of Total Meals in cell K5? (no characters or commas)
Q12. Still on the Travel expense calculator worksheet. What is the value of Total Other Expenses in cell K6? (no characters or commas and rounded to the nearest whole number with no decimal point)
Q13. How many rows are in your Name Manager?
Q14. In the Travel expense calculator sheet, what is the value of Total Trip Expenses in K7 (rounded to the nearest dollar, input as a number with no “$” and no commas)?
Q15. In the Travel expense calculator sheet, what is the value of L21 (to three decimal places)?
Excel Fundamentals for Data Analysis Course Review:
In our experience, we suggest you enroll in the Excel Fundamentals for Data Analysis course and gain new skills from professionals, completely free; we assure you it will be worth it.
The Excel Fundamentals for Data Analysis course is available on Coursera for free. If you are stuck anywhere on a quiz or graded assessment, just visit Networking Funda to get the Excel Fundamentals for Data Analysis Coursera Quiz Answers.
Conclusion:
I hope these Excel Fundamentals for Data Analysis Coursera Quiz Answers help you learn something new from this course. If they did, don’t forget to bookmark our site for more Quiz Answers.
This course is intended for audiences of all experience levels who are interested in learning new skills in a business context; there are no prerequisite courses.
Learning analytics and higher education: a proposed model for establishing informed consent mechanisms to promote student privacy and autonomy
Abstract
By tracking, aggregating, and analyzing student profiles along with students’ digital and analog behaviors captured in information systems, universities are beginning to open the black box of education using learning analytics technologies. However, the increase in and usage of sensitive and personal student data present unique privacy concerns. I argue that privacy-as-control of personal information is autonomy promoting, and that students should be informed about these information flows and to what ends their institution is using them. Informed consent is one mechanism by which to accomplish these goals, but Big Data practices challenge the efficacy of this strategy. To ensure the usefulness of informed consent, I argue for the development of Platform for Privacy Preferences (P3P) technology and assert that privacy dashboards will enable student control and consent mechanisms, while providing an opportunity for institutions to justify their practices according to existing norms and values.
Introduction
Big Data is a ‘cultural, technological, and scholarly phenomenon’ (Boyd & Crawford, 2012, p. 663) that transcends boundaries; consequently, researchers and pundits alike have had a hard time establishing a ‘rigorous definition’ (Mayer-Schönberger & Cukier, 2013, p. 6). Big Data generally allows for ‘things one can do at a large scale that cannot be done at a smaller one, to extract new insights or create new forms of value’ due to new flows of data and information derived from observing human behaviors or information disclosures by individuals (Mayer-Schönberger & Cukier, 2013, p. 6). This has proven to be valuable in many contexts (e.g., commerce, national security, etc.), and higher education is now pursuing its own Big Data agenda to mine for insights into student behaviors, learning processes, and institutional practices using learning analytics technology.
Much like Big Data, there exists no commonly accepted definition of learning analytics (for sundry definitions, see Dawson, Heathcote, & Poole, 2010; van Barneveld, Arnold, & Campbell, 2012). However, it is often understood as ‘the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising [sic] learning and the environments in which it occurs’ (Long & Siemens, 2011, p. 33).
While emerging learning analytics practices hold some promise to improve higher education, they are morally complicated and raise ethical questions, especially around student privacy. Since learning analytics often relies on aggregating significant amounts of sensitive and personal student data from a complex network of information flows, an important question arises as to whether students have a right to limit data analysis practices and express their privacy preferences as a means of controlling their personal data and information.
I begin the paper with an overview of learning analytics. I follow this part with a discussion on privacy theory, especially as it relates to information control and how such controls support and extend individual autonomy. Informed consent has historically been the mechanism by which we try to control information about ourselves, so I consider its role in expressing our privacy preferences and its limitations in the age of Big Data. Next, I highlight the many ways students unknowingly disclose data and information to their institution and third parties without the ability to control such disclosures. Finally, I propose a model for establishing informed consent mechanisms to promote student privacy and autonomy using P3P technology and privacy dashboards in ways that balance student and institutional interests.
Big data and higher education
New pathways for higher education policy and the learning sciences are opening up due to the growth of interconnected databases in data warehouses. Many learning analytics advocates believe capturing, archiving, and analyzing student profiles and behaviors will lead to improved institutional decision making, advancements in learning outcomes for at-risk students, greater trust in institutions due to the disclosure of data, and significant evolutions in pedagogy, among other things (Long & Siemens, 2011). To support these ends, universities are actively aggregating student data to support an array of learning analytics initiatives, which I address in this section.
Opening the black box of learning with student data and learning analytics
A complex assemblage of information and educational technology drives colleges and universities, and it has brought about a new phenomenon: The datafication of learning (Mayer-Schönberger & Cukier, 2014a). Each bit and byte, once aggregated and analyzed, may hold potential to reveal impressive new insights into student learning behaviors and outcomes. In the hands of educators, data-based visualizations of how and what a student is learning can assist instructors to develop customized instructional strategies and curricula. Each student represents a potential source of data, and considering that 21 million students enrolled in American higher education institutions in 2012 (National Center for Education Statistics, 2013), universities have a latent trove of data ready for Big Data projects.
Beyond the individual student level, there also exist opportunities for institutions to share their disparate datasets (see Unizin, 2015) or even link data at a federal level (see Kolowich, 2013), which presents further opportunities for analytical insights at an even larger scale. Eleven research-intensive universities and two state systems are members of Unizin’s consortium, which according to its CEO and COO aims to ‘[p]articipate in the creation of the world’s largest learning laboratory’ by creating a ‘data pool’ that ‘would allow institutions to take a scholarly and practical approach to critical questions around student performance’ (Littleworth & Qazi, 2017). Joining the consortium provides an institution access to a data warehouse in which over 720,000 students may exist as data points. At the time of this writing, the warehouse reportedly held all the data created in the consortium’s central learning management system, Canvas; however, there is a possibility to enhance analytics by aggregating data from other sources (e.g., admissions records) and Unizin tools (Qazi, 2017; Raths, 2016).
Learning management system analytics
The most common application of learning analytics technology is in the context of an institution’s learning management system (LMS). LMSs are traditionally used to support online or hybrid teaching environments, within which students interact with various learning objects and work collaboratively. For example, students take quizzes; submit assignments; read assigned materials, such as journal articles and other electronic texts (eTexts or eBooks); and interact with their peers in discussion forums and wikis.
Learning analytics systems capture student behaviors, commonly referred to as the ‘digital breadcrumbs’ students leave as they navigate LMSs and interact with their peers and the digital space (Norris, 2011). In the recent past, it was a ‘slow and cumbersome’ process to export LMS data for analysis, but it is increasingly the case that common LMS systems include data extraction tools alongside their analytic products (Brown, Dehoney, & Millichamp, 2015; Macfadyen & Dawson, 2010, p. 590). The analytics can descriptively detail the date, time, and duration of students’ digital movements, including if, when, and for how long they read an electronic text (e.g., eBook or PDF article) or took an online quiz. Other statistics detail a student’s overall completion rate of a course, predict whether or not a student will succeed in the course, and map the strength of a student’s peer-to-peer/peer-to-instructor network using social network analysis. LMSs embedded with learning analytics tools use data visualization techniques to create information dashboards from which instructors can infer how to intervene in a student’s education, while other systems allow students themselves to monitor their own progress using similar dashboards. Some systems automatically intervene with algorithms, which send status updates or e-mails to students and instructors alike, notifying both parties of potential problems.
LMS-based learning analytics are informed by student data from other campus systems, including commonly used student information systems (SISs). SISs hold a majority of the information students disclose on their applications for admission, their enrollment records, and their academic history. Over time, their digital records may be augmented with other information, including financial aid awards, involvement on campus, disciplinary and criminal reports, and personal health information.
eAdvising analytics
eAdvising systems are another area ripe for learning analytics. Austin Peay State University’s eAdvising system includes a recommendation engine that suggests courses based on students’ academic profiles and considers their course path with the past success of peers like them (Denley, 2012). Other eAdvising systems warn students when they stray from their chosen path, blocking them completely from registering for courses if they fail to return to a pre-determined set of courses; or if students are deemed to be ‘at risk,’ professional advisors give them priority advising attention (California State University Long Beach, 2014; Lewis, 2011; Parry, 2012).
eAdvising analytics rely heavily on data held within institutional SISs. The historical academic information, especially past ACT and SAT scores, alongside current academic information, such as course grades and enrollment records, are crucial for predictive eAdvising analytics systems. eAdvising systems, like Campus Labs’ Beacon system, pull supplemental data from sources like personality profiles, specialized entrance exams and surveys, and geolocation information from student ID card swipes or WiFi-connected device beacons. For example, Beacon’s survey questions automatically create an alert for resident assistants in campus housing if students indicate that they are having trouble making friends, and geolocation tracking information is available to advisors for them to assess a student’s engagement on campus (Campus Labs, 2014).
Institutional analytics
While learning analytics applications typically focus attention on individual courses and learners, there is a growing market for institution-wide analytic applications. Brightspace, Blackboard, and Instructure, all prominent educational technology companies, offer learning analytics solutions that allow institutional researchers and other administrators access to data and dashboards that compare student activity and learning metrics within and between courses, departments, and colleges across a university.
Institution-wide learning analytics afford administrators the ability to drill down into segmented and longitudinal student data. Doing so helps an institution develop reports concerning student performance with respect to learning outcomes, departmental performance measures, and instructor performance over time. These measures and more, some argue, help an institution and its individual departments respond to stakeholder pressures to demonstrate institutional effectiveness and more easily meet government reporting requirements (Glass, 2013; Long & Siemens, 2011).
Edge-case analytics using social and biometric data
Leading thinkers in the learning analytics field argue that a student’s ‘every click, every Tweet or Facebook status update, every social interaction, and every page read online’ leaves a ‘digital footprint’ (Long & Siemens, 2011, p. 32) that can ‘render visible’ (Buckingham Shum & Ferguson, 2012, p. 5) unseen social learning behaviors. This ‘smorgasbord’ (Diaz & Brown, 2012, p. 13) approach to data aggregation motivates novel approaches to learning analytics and encourages ‘fishing expeditions’ (Mayer-Schönberger & Cukier, 2013, p. 29) within the data for new insights and trends.
Learning analytics advocates have yet to demonstrate the efficacy of social analytics at scale, but emerging projects point to some potential uses. Some institutions are monitoring and mining their students’ use of Facebook (see Ho, 2011; Hoover, 2012), while other institutions even scan RFID chips in student IDs at lecture halls and classrooms in order to correlate attendance with classroom performance (Brazy, 2010; O’Connor, 2010). If universities track student movements using geolocation data and map interpersonal connections, they can begin to understand the social lives of students, their relationships, and the web of personal networks on campus, which Matt Pittinsky (formerly of Blackboard) believes is a ‘very useful layer of data …. [that shows] evidence of social integration’ (Parry, 2012, para. 57), an important indicator of academic success.
Institutions and researchers are also exploring the role of biometric data in learning analytics. Advocates of biometrics for learning analytics argue that measurements of a student’s ‘heart rate, body temperature, ambient luminosity, [location and movement],’ among other things can be useful for understanding attention, stress, and sleep patterns, which hold the potential to determine circumstances that impede or aid learning (Arriba Pérez, Santos, & Rodriguez, 2016, p. 43). When biometrics and the analytics resulting from them are shared with learners, initial research indicates that such information may help individuals self-regulate their attention behaviors (Spann, Schaeffer, & Siemens, 2017).
To these ends, the Bill and Melinda Gates Foundation, an outspoken proponent of data-driven education, funded the development of an ‘engagement pedometer,’ a biometric bracelet that tracks electrical charges in a student’s sympathetic nervous system (Simon, 2012). By way of analytics that analyze each bracelet’s data, instructors can see a student’s engagement level (or lack thereof) in real time. While this and other similar projects have not reached the mainstream, they foreshadow the role biometric data can play in learning analytics projects (see Alcorn, 2013; Schiller, 2015).
Learning analytics and privacy as control of one’s data and information
If institutions continue to develop data analytics projects and infrastructures in order to capture sensitive, comprehensive student data, the obligation to do so responsibly will increase as well. Even with noble and good ends in mind—namely improving learning (however defined)—learning analytics practices surveil and intervene in student lives. Consequently, learning analytics, like many Big Data practices, are rife with privacy problems and ethical quandaries, which continue to grow in complexity (Johnson, Adams Becker, Estrada, & Freeman, 2015).
The question then is whether those who design learning analytics systems and support their ends will provide students privacy protections. Evidence in the literature suggests that learning analytics highlights ‘blind spots’ (Greller & Drachsler, 2012, p. 50) in institutional policy and ‘poses some new boundary conditions’ (Pardo & Siemens, 2014, p. 442) around student data and privacy, which may negatively affect the future success of learning analytics if left unaddressed (Siemens, 2012). One such question concerns the degree to which students should control information about themselves; I turn to this for the remainder of the article.
Privacy as control of information
Big Data practices often raise significant privacy issues, which have sparked academic and public debate with fervor and intensity last seen in the 1970s when concerns erupted regarding government data banks (see Lyon, 2014; Marr, 2015). The rise of data collection in and of itself is concerning, but the advancing pace of predictive analytics and their role in public and private life pushes against accepted normative, ethical, and legal privacy boundaries in ways unforeseen and unknown (Crawford & Schultz, 2014). As such, the scholarly conversation surrounding Big Data and privacy, especially information privacy, is multifaceted and reflects various theories and approaches to privacy problems.
Privacy as a form of information control is a dominant theme in scholarly literature, serves as the basis for legal doctrine, and has informed important Supreme Court decisions (Nissenbaum, 2010; Solove, 2008). According to Alan Westin’s (1967, p. 7) seminal text, Privacy and Freedom, privacy is an individual’s ‘right to determine for themselves when, how, and to what extent information about them is communicated to others.’ A control approach to privacy assumes not that information is absent in others’ minds, but that we can determine who can access information about ourselves and limit to whom and under what conditions it is disclosed (Fried, 1968; Froomkin, 2000; Nissenbaum, 2010).
Privacy-as-control is biased towards individual choice and treats information as a part of one’s person. In many respects, individual information control treats personal information as a Lockean property right (Solove, 2008). By acknowledging that individuals have the right to choose how others can access and use their information, this privacy perspective advances the idea that information ‘flows naturally from selfhood’ (Solove, 2008, p. 26), thus ‘every Man has a Property in his own Person’ and that property should be respected as being part and parcel of one’s self (Locke, 1689, emphasis and capitalization in original).
Losing control
Big Data practices present unique issues that are dissolving our control over personal information. The technological mélange of ubiquitous sensors, devices, networks, and applications around and embedded in our lives continues to surreptitiously capture data about us. These data are valuable, prompting companies, institutions, and especially data brokers to build data-mining infrastructures; the under-regulated brokerage industry, in particular, often fails to protect individuals against consequential data leaks (Roderick, 2014; see Cowley, Bernard, & Hakim, 2017).
When identifiable data are aggregated and analyzed, lives become more transparent to those with the data while their data practices grow more opaque and influential. This is what Richards and King (2013) call the Transparency Paradox. While we may wish to keep information private by expecting companies to deidentify data, the connected nature of databases and the power of analytic technologies often make deidentification efforts futile (Ohm, 2010). Richards and King (2013) identify this as the Identity Paradox. And institutions and organizations continue to grow their privilege and power over individuals by exploiting their personal information, while the same individuals are left with few options to rein in flows of personal information. This is Richards and King’s (2013) final paradox: the Power Paradox.
The risk of each paradox would be lessened if individuals had more control over their personal information. However, institutional bureaucracy, corporate policy, and legal jargon add to a Kafkaesque nexus that makes such information control processes unapproachable, much less useful (Solove, 2004; Tene & Polonetsky, 2013). Without some checks on personal information flows and the development of digital dossiers, individuals will have little say in how powerful entities use identifiable information (Solove, 2004).
Harms to autonomy
What is problematic about people losing control over their information is the effect Big Data and other data-driven practices have on autonomy (Goldman, 1999). Autonomous individuals are self-governing, which is to say that they are able to incorporate their ‘values and reasons’ (Rubel & Jones, 2016, p. 148) into rational decision-making processes according to their will (Kant, 1785). Society cares about protecting one’s autonomy because it ‘shows respect for the person’ (Marx, 1999, p. 63).
Autonomy and information privacy are often interlinked. According to Rubel and Jones (2016), three discrete types of connections exist between the concepts. First, privacy may be an object of autonomy, which is to say that individuals may choose to seek information privacy or not. Second, privacy may be a condition of autonomy. Here, privacy serves ‘a fundamental and ineliminable role’ (Alfino & Mayes, 2003, p. 6) in autonomy by protecting individuals from undue intrusions into spheres of life that could limit ‘individual conscience’ (Richards, 2008, p. 404)—such as developing intellectually, forming moral constructions, and assessing social values—or influence one’s decisions to the point that they are not fully one’s own (Bloustein, 1964; Reiman, 1976). Finally, privacy may promote autonomy. When organizations and institutions respect information privacy expectations and allow information to flow according to those expectations, they advance autonomist aims; however, when these same actors hide information, use information to deceive, or employ information practices to interfere and manipulate individual lives, they reduce autonomy.
The role of informed consent in expressing privacy choices
Informed consent, or ‘notice and choice,’ is the process by which individuals are notified of how a second party, such as an organization (like a business) or an institution (like a university), will use information about them (Tene & Polonetsky, 2013, p. 260). It also informs them of their rights to privacy, as well as the express rights the second party retains regarding the information. After being informed of rights and information practices, individuals can then choose whether to agree—to consent—to the terms in front of them and enter into a relationship with the second party. However, even though informed consent acts as ‘the gold standard for privacy protection’ (National Research Council, 2007, p. 48), it is not a panacea for privacy problems (Flaherty, 1999).
Rarely are individuals fully aware of what they are agreeing to. In our current data brokerage climate, Adam Moore (2010) argues that the benefits we gain from consenting to one set of information-based services are far outnumbered by the harms that can accrue when the same information is sold later on. Furthermore, consent implies awareness of how our information will be used, but we can rarely envision the downstream uses, the unequal benefit to the second and third parties to whom we disclose information, and the potential consequences for our privacy (Hui & Png, 2006; Marx, 1999). Also concerning is the fact that informed consent procedures are usually biased towards those who seek out personal information. It is also often the case that individuals must choose to opt out of inclusive information gathering practices, not opt in, with the result that more information is gathered than necessary.
Data miners do not shoulder full responsibility for the weaknesses of informed consent; some of it rests with individuals. Acquisti’s (2004) work on informed consent behaviors revealed that individuals desire immediate gratification and are more willing to opt in to inclusive information practices in part because it requires them to do less work to protect their privacy and limit information disclosures. This want for gratification is more quickly satiated when companies provide a sense of control—even if this is not the case—which motivates individuals to consent (Brandimarte, Acquisti, & Loewenstein, 2013). Through this lens we can see how informed consent can become a predatory structure that neither benefits individuals nor promotes their ability ‘to make meaningful, uncoerced choices’ (Goldman, 1999, p. 103) through negotiation of information disclosure terms.
Big Data practices add additional challenges to informed consent mechanisms in ways that create informational and technological issues, some of which may be insurmountable. It is increasingly the case that informed consent continues to ‘[groan] under the weight’ of dynamic and complex assemblages of systems, information flows, and data-driven practices; consequently, new approaches to informed consent are necessary in the Big Data era if we are to recapture the value informed consent once held for protecting privacy (Barocas & Nissenbaum, 2014, p. 64). Going forward, I recommend a novel approach to improving informed consent after first illustrating the many ways students unwittingly disclose data and information about themselves to higher education actors.
Disclosing and using data without student consent
Historically, higher education institutions have failed to promote informed consent practices within and outside of classrooms, using paternalistic justifications to warrant their information practices (Connelly, 2000). But when students were recently asked about data practices in higher education, they made compelling statements in favor of personal data control and the need for fair and useful informed consent processes (Slade & Prinsloo, 2014). The discrepancy between what institutions think they can do with student data and what students expect is done with their data may ‘rupture the fragile balance of respect and trust upon which this relationship is founded’ (Beattie, Woodley, & Souter, 2014, p. 424). By highlighting information practices in higher education, this section details when and how students disclose data and information about themselves without ever being informed about the analytic purposes to which they may be put by their institution.
Comprehensive profiles
One driving motivation of those who advocate for learning analytics technologies is to understand how different populations of students learn. To accomplish this, institutions must develop comprehensive profiles about learners. Businesses that share this goal look outward, purchasing data profiles from data brokers; higher education institutions look inward, mining the trove of information gleaned from admissions materials and applications.
By building data-rich student profiles, universities set the foundation on which to run analytical tests and develop predictions. Where admissions offices are concerned, institutional actors can compare data profiles of applicants with segments of the existing student body to develop predictive scores of the applicant’s potential for success, and thus better inform the student enrollment process (Goff & Shaffer, 2014). After students enroll in their institution of choice, learning analytics technologies often correlate their digital and analog behaviors with specific segments of their respective profiles (e.g., GPA, race, gender, etc.); in fact, the efficacy of most learning analytics applications would markedly decrease if it were not for the ability to compare a student’s digital trails with the wealth of information acquired from admissions applications. And while data profiles born of admissions applications are rich, they become even more so as other sources of student data are grafted on as students interact with institutional information systems.
The problem is that it is unlikely that higher education institutions fully inform their prospective students about how the details of their lives revealed on admissions applications will be used and by whom. Clearly, students expect that these applications will inform admissions decisions, but they fail to intuit downstream uses, and institutions do not explicitly explain information practices that rely on this store of personal data. In fact, applications for admission, the point at which we may expect universities to establish informed consent, may not even express student privacy rights, especially with regard to information control; many institutions even claim a property right to prospective students’ information (see footnote 7). This practice is especially problematic considering that students may feel they have no option but to reveal all of the sensitive details about their lives, as there is always the chance they will be denied admission if they fail to provide information.
Classroom disclosures
Besides the application for admission, students also reveal sensitive information about themselves by creating profiles on third-party applications their institutions and instructors often require them to use in courses. Students are not routinely informed of the ways in which the companies responsible for these learning platforms use and protect the information students disclose as users (see footnote 8). Consider the example of Piazza, a company that offers question-and-answer functionality as a stand-alone application or with direct integration into common LMSs. Over 750,000 students at 1,000 institutions in 70 countries use Piazza to share information about themselves, access course materials, and communicate with their peers, instructors, and teaching assistants (J. Gilmartin [Piazza representative], personal communication; Piazza, n.d.). Data derived from students—including disclosures about their class history, internships, majors, and expected graduation year—have helped Piazza build a secondary service, Piazza Careers. This service enables technology companies to court students for jobs if they fit a specific profile, that is, after the companies purchase access to Piazza Careers’ store of student data-based analytics and other services (Piazza Careers, n.d.).
Higher education institutions often enter into contracts with third-party educational technology services in order to get access to useful teaching and learning applications; in return, educational technology companies get access to valuable student data. Some may assume that students are already aware of, or can find out, how these applications scrape user profiles for information to build secondary tools and services, but this is not accurate. While institutions often negotiate terms of service agreements on behalf of their students, the details of those agreements are opaque and not always readily or publicly accessible.
Simply because policies or memoranda of understanding exist that detail how student data should be used, we cannot assume that such agreements work to the benefit of students. In fact, a lack of transparency regarding these agreements and a failure to fully inform students about how third-party companies use their data raise immediate concerns and questions. It may be that institutions are withholding information about data practices to keep student privacy concerns at bay, concerns that could potentially derail beneficial contracts with vendors.
Universities may claim that hindrances to student information flows, like requiring informed consent, impede necessary institutional practices, like instruction or even day-to-day business activities. In fact, §99.31 of the Family Educational Rights and Privacy Act (FERPA, 1974) allows the institution to disclose private, identifiable student information—without informing students—to anyone within the institution who has a ‘legitimate educational interest’ or to a third party who provides ‘institutional services or functions,’ like an educational technology company (see footnote 9). But as we saw with the Piazza example, third parties can use student data for their own benefit.
Continuously tracked
With the rise of Big Data in higher education, universities will continually track students’ digital and physical movements and activities, and students will unknowingly disclose information about themselves on a daily basis. What is most problematic about these types of data disclosures is that the technology that enables them seems benign and beneficial. Students are not aware of the complex web of data capture technologies that store, aggregate, and analyze their information. Yet, there are particular types of data tracking that students may—and arguably should—be informed about to empower them to make informed decisions in their lives.
Tracking technologies that capture geolocation, temporal data, and metadata raise serious concerns. Systems that can map in real time (or close to it) students’ physical and/or digital location and the time of their movements or activities disturb our normative expectations and rile up our concerns regarding ‘dataveillance’ (Clark, 1987). It is plausible that universities will use geolocation tracking to incentivize less social and more academically-oriented movements, like visiting the library, in order to improve learning outcomes (see footnote 10). And special categories of students may come under higher scrutiny than others, such as minorities who have received diversity scholarships or student-athletes who are already under constant surveillance where their social media is concerned (see Reed, 2013). In both cases, students may more closely regulate their behaviors due to concerns about how their data trails could be used against them (Hier, 2003).
Analytic technologies that assess a student’s social well-being and affective state may also impact a student’s expectation of privacy. Text mining, social network analysis, and biometric devices that observe and analyze data trails can monitor a student’s level of engagement with their courses, discover whether or not they are socially connected with peers, and reveal if they are experiencing emotional issues, which some argue justifies institutional overrides of individual privacy (Prinsloo & Slade, 2017; Sclater, 2016). In effect, this makes typically invisible states of being and doing highly visible to any number of institutional actors with access rights. Yet, anyone who has had the privilege of experiencing college would balk at these revelations, as these formative years are often a time for identity development and exploration, socially and intellectually. Students may rightfully worry that the data, and the insights mined from it, will become a part of their permanent educational record and lead to decontextualized decision making (see Mayer-Schönberger & Cukier, 2014b). As evidence of this point, Stanford University students discovered that their institution logged when they used their ID cards to unlock doors; this information led to student backlash, substantiating that these are not unfounded concerns (see Pérez-Peña, 2015).
Building an informed consent model for learning analytics
Institutions retain the freedom to develop policies and practices in support of student privacy: FERPA is the policy ‘floor’ and not the ‘ceiling’ of how institutions should regulate and safeguard student information flows (Family Policy Compliance Office, 2011, p. 5; Rubel & Jones, 2016). In this section, I propose that institutions should use these freedoms to develop a technologically-enhanced informed consent mechanism using data privacy dashboards built on top of a technical identity layer. This model, I argue, considers the weaknesses of informed consent in the age of Big Data, and it prompts institutions to explicitly justify how and when their information practices run afoul of existing norms in order to procure student consent.
The emerging student voice
From an institutional perspective, informed consent may run counter to the ends to which universities use learning analytics as a means. Recall the statistician’s mantra: More data, more power. Informed consent opens up opportunities for limited access to, and limited coverage of, student life; consequently, students may reduce the efficacy of learning analytics by expressing their privacy preferences for greater control over identifiable data (Danezis et al., 2014 in Hoel & Chen, 2016; Slade & Galpin, 2012).
In my conversations with institutional actors, both for other research projects on learning analytics and in my daily interactions with administrators and staff, this argument—that institutions need all available student data to act in students’ best interests—is often followed up with the position that students do not care about privacy in the first place, thus robust privacy protections are neither needed nor worth the effort. Emerging empirical evidence refutes this argument. Students are ‘weirded out’ by institutional surveillance (Roberts, Howell, Seaman, & Gibson, 2016, p. 8), have expressed support for informed consent processes (Roberts et al., 2016), are unaware of how their institution protects their privacy (Fisher, Valenzuela, & Whale, 2014), and argue that they should be able to limit data sharing for learning analytics (Ifenthaler & Schumacher, 2016).
Pushing forward with learning analytics without considering student privacy preferences—or ignoring such preferences altogether—is foolhardy and morally suspect. I will not go so far as to say that privacy-lite learning analytics initiatives are meant to do harm; in fact, they are most likely well-intentioned but misplaced paternalistic actions (Jones, 2017). However, not considering student privacy preferences runs counter to norms of respecting individual autonomy and expressions thereof in choice making. In the long run, neglecting the emerging student voice weakens the foundation on which learning analytics are being developed (Beattie et al., 2014; Roberts et al., 2016). The question, then, is how to pursue informed consent mechanisms.
Informed consent in an age of big data
Big Data practices that disclose and capture data and information across contexts pose significant problems for informed consent. The volume of data and constant evolution of information flows makes it nigh impossible to effectively deploy informed consent mechanisms. Any hope that one’s identity is protected by anonymization practices is dashed by the fact that aggregating enough data can tell tales about one’s identity in ways that allow powerful actors to ‘control and steer’ individuals even without knowing their full identity (Gutwirth & De Hert, 2008, p. 289 in Barocas & Nissenbaum, 2014). Standard informed consent mechanisms cannot comprehensively detail the relationship between the data subject and the data miner, nor can they fully capture the attributes that characterize data and information flows; as such, their efficacy is limited (Barocas & Nissenbaum, 2014). However, there is still some hope for informed consent within some contexts—including higher education.
Big Data information flows are hard to track and manage. They create a web of connections between a variety of actors and entities in ways that often ignore norms, disregard transmission principles, and do not heed contextual values. But in universities, flows of student information are trackable, manageable, and—when given proper care—can maintain harmony with extant norms. The central problem is that higher education institutions have not evolved their identity infrastructures while building capacity for data warehousing and analytics. Universities need to advance these infrastructures before they can begin to educate students about the purposes of identifiable data flows and support student privacy preferences.
Maximizing the identity layer
If the goal is to promote student choice over how their identifiable data flows, to whom, under particular conditions, and towards specific ends, then the first step is to clearly attribute data to students. Once these connections are accurately made, students will have the opportunity to express their choice over how their data flows using technical means.
Some will argue that this is a poor starting point. They may state that identifiable data should not be gathered for learning analytics purposes without student consent in the first place. While this position has its merits, it is untenable. Institutions do need identifiable data for legitimate business and educational purposes. But more importantly, the default state of institutional infrastructures is to identify students, authenticate their credentials, and use those credentials to authorize access to a variety of systems.
Identity management technologies, such as active directory services and single sign-on protocols, serve as the gatekeepers to student information systems, online learning applications, and a campus’s networks, among many other systems (Bruhn, Gettes, & West, 2003). These identity management systems create an identity layer in campus data infrastructures that connects identifiable students to flows of data and information. The default state of identification presents a significant opportunity to enhance the identity layer by adding protocols that enable the expression of privacy preferences and force systems to respect such preferences downstream. The Platform for Privacy Preferences (P3P) protocol serves as a model for maximizing the existing identity layer.
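The gatekeeping role described above can be sketched in a few lines of Python: a hypothetical identity assertion, of the kind a single sign-on flow might produce, carries per-category consent flags that downstream systems consult before releasing data. All names here are illustrative; no real directory or SSO product is assumed.

```python
from dataclasses import dataclass, field

@dataclass
class IdentityAssertion:
    """Hypothetical sketch of an SSO identity assertion extended with
    privacy-preference attributes that travel with the identity itself."""
    student_id: str
    authenticated: bool
    # Per-category consent flags, e.g. {"geolocation": False}
    consent: dict = field(default_factory=dict)

def authorize(assertion: IdentityAssertion, system: str, data_category: str) -> bool:
    """Release data only if the student is authenticated AND has not
    opted out of the requested data category."""
    if not assertion.authenticated:
        return False
    return assertion.consent.get(data_category, True)  # default: permitted

alice = IdentityAssertion("s123", True, {"geolocation": False, "lms_clicks": True})
print(authorize(alice, "advising", "geolocation"))  # False: opted out
print(authorize(alice, "advising", "lms_clicks"))   # True
```

The design choice is simply that consent metadata rides along with the authentication credential, so any system reached through the identity layer can honor it without separate lookups.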
The platform for privacy preferences (P3P) model
The World Wide Web Consortium (W3C) developed the Platform for Privacy Preferences (P3P) protocol in the early 2000s (W3C, 2007). About the protocol, Lorrie Cranor (2003)—one of the lead architects of P3P—writes:
[P3P] specifies a standard computer-readable format for Web site privacy policies. P3P-enabled Web browsers read policies published in P3P format and compare them with user-specified privacy settings. Thus, users can rely on their agents to read and evaluate privacy policies on their behalf. Furthermore, the standardized multiple-choice format of P3P policies facilitates direct comparisons between policies and the automatic generation of standard-format human-readable privacy notices. (p. 50)
Lawrence Lessig (2006) generally describes P3P as a machine-readable protocol that enables technologies to communicate, assess, and respect individual privacy choices set in applications and digital tools. Users set their privacy preferences in their web browser; the browser, acting as the agent, interprets the privacy policies of the website; and the browser then determines whether or not the website respects users’ privacy preferences (Cranor, Egelman, Sheng, McDonald, & Chowdhury, 2008). When the policies are congruent with the preferences, the user engages with the website; but when the two are incongruent, the browser warns the user of the privacy preference mismatch, blocks the cookies, and requests user input for how to proceed. Researchers also expanded P3P to improve privacy policy accessibility using simplified language and browsable matrices, including standardized ‘nutrition label’ notices to transform privacy policies into intelligible, actionable information for users (Kelley, Bresee, Cranor, & Reeder, 2009).
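The matching step Cranor describes can be illustrated with a small sketch. Real P3P policies are XML documents and user preferences are expressed in the APPEL language; plain Python structures stand in for both here, but the comparison logic, flagging every (data type, purpose) pair a policy claims that the user has not permitted, is the heart of the protocol.

```python
# Illustrative stand-in for a site's machine-readable privacy policy:
# each data type maps to the purposes the site claims for it.
site_policy = {
    "cookies": ["site_analytics", "third_party_marketing"],
    "email":   ["account_management"],
}

# Stand-in for the user's browser-side privacy settings:
# the purposes the user permits for each data type.
user_preferences = {
    "cookies": {"site_analytics"},   # marketing use not allowed
    "email":   {"account_management"},
}

def find_mismatches(policy, prefs):
    """Return (data_type, purpose) pairs the policy claims but the user
    has not permitted -- the cue for the agent to warn the user."""
    mismatches = []
    for data_type, purposes in policy.items():
        allowed = prefs.get(data_type, set())
        for purpose in purposes:
            if purpose not in allowed:
                mismatches.append((data_type, purpose))
    return mismatches

conflicts = find_mismatches(site_policy, user_preferences)
# On a mismatch, the agent would block cookies and prompt the user.
print(conflicts)  # [('cookies', 'third_party_marketing')]
```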
P3P ultimately failed. Major web companies, such as Google, ended up routing around user privacy preferences with hacks, and browsers, like Microsoft’s Internet Explorer (IE), never fully embraced the P3P protocol (Fulton, 2012). And even though IE did have some P3P capabilities, anecdotal evidence suggests that users were not fully aware of them (see Cranor, 2012a, footnote 38). Reflecting on the demise of P3P, Cranor (2012b) writes that a major reason for its low adoption rate was that P3P was an optional, self-regulatory privacy standard without any teeth; there was simply little to no incentive to respect users’ privacy preferences. The protocol, however, was a technical achievement. It proved that individuals could set privacy preferences, web applications could communicate their privacy policies in intelligible ways, and users would be the final arbiters in choosing whether or not to disclose information about themselves.
We can imagine scenarios where P3P technology could regulate the flow of student information according to student expectations for learning analytics. For instance, in an eAdvising system that uses geolocation tracking to determine student interactions with learning spaces (e.g., libraries, writing and other tutoring centers), students may wish for the data either to not be retained at all or to remain undisclosed to their advisors. Informed by P3P technology, that information would be held securely within the data warehouse and remain undisclosed to this particular actor type. Similarly, students may be comfortable with the disclosure of identifiable learning management system interaction data to instructors, but with the limitation that such data not include their IP address. The protocol would interpret these rules, disclose the appropriate data, and withhold the restricted data accordingly (see footnote 11).
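The two scenarios above amount to per-actor disclosure rules: withhold a data category entirely, or release a record with certain fields stripped. A minimal sketch, with hypothetical field and actor names (none drawn from a real system):

```python
# A student's preference rules: geolocation data stays undisclosed to
# advisors; LMS data reaches instructors with the IP address removed.
student_rules = {
    "advisor":    {"deny": {"geolocation"}, "strip_fields": set()},
    "instructor": {"deny": set(),           "strip_fields": {"ip_address"}},
}

def disclose(record: dict, data_category: str, actor_type: str, rules: dict):
    """Apply a student's preference rules before releasing a record.
    Returns None when the whole category is withheld."""
    rule = rules.get(actor_type, {"deny": set(), "strip_fields": set()})
    if data_category in rule["deny"]:
        return None  # withheld entirely; stays in the warehouse
    return {k: v for k, v in record.items() if k not in rule["strip_fields"]}

lms_event = {"student_id": "s123", "page": "quiz_2", "ip_address": "10.0.0.5"}
print(disclose(lms_event, "lms_interaction", "instructor", student_rules))
# {'student_id': 's123', 'page': 'quiz_2'}
print(disclose(lms_event, "geolocation", "advisor", student_rules))  # None
```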
Very little work has been done to date to capitalize on the existing identity layer to build P3P-like protocols; this is especially true in the United States. The work that has been accomplished has centered in Europe. Cooper and Hoel (2015) highlight Norwegian education, which at a national level adopted Feide, a federated identity management system, for use in primary, secondary, and higher education institutions. According to their report, ‘[the university] … register[s] and authenticate[s] their members[, and] the service providers define their access rules’ (p. 53). After the implementation of Feide, Connect, an interoperability layer, was added to enable secure data transfer using standardized APIs and support the expression of privacy preferences. When Norwegian students initiate relationships with third-party service providers through Connect, they voluntarily consent to particularized data practices but retain the right to opt out. If students choose to opt out, the service provider is directed to delete identifiable data. The university may, as well, require students to consent to certain service providers and their data practices.
Building student privacy preferences into data dashboards
A Platform for Privacy Preferences (P3P) protocol provides the means by which student privacy preferences are respected, but it neither informs students of information practices nor enables them to consent to such practices by setting privacy preferences. For that to occur, student privacy dashboards need to be built (like Connect), which can be integrated into existing data dashboards.
As previously mentioned, learning analytics technology shares its statistical findings and predictions with institutional actors through visualizations (e.g., charts, trend lines, etc.). But, in order to promote self-awareness and encourage reflection among learners, some proponents of learning analytics advocate for creating data dashboards specifically for students (Clow, 2012; Duval et al., 2012). Data dashboards enable self-management over learning, and they also serve as a model for how informed consent could be improved.
Improving existing data dashboards with privacy preference settings would provide a central location where students would be informed about information practices that use their data and give them opportunities to opt out of personal data flows, possibly at a granular level. Privacy-promoting dashboards could include improved matrices and so-called nutrition label privacy policies, like what was developed for P3P. With such applications, students could learn about identifiable data flows and the ends to which they are put, dictate how they are informed (e.g., e-mail or text) about new data flows, and use toggle-like switches to determine what aspects of their information and data should be used for very specific purposes. Furthermore, privacy elements of data dashboards could archive and provide simple access to relevant information policies, as well as important communications from their institution regarding privacy concerns (e.g., data breaches).
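The granular controls described above—per-purpose toggles over identifiable data flows plus a preferred channel for notifications about new flows—can be represented with a simple data model. This is a sketch under assumptions: `PrivacyPreferences` and its field names are hypothetical, not part of any existing dashboard, and the example data types and purposes are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyPreferences:
    # Illustrative model only; all names are assumptions.
    notify_via: str = "email"  # how the student is told about new flows, e.g. "email" or "text"
    # (data_type, purpose) -> whether the student allows this flow
    toggles: dict = field(default_factory=dict)

    def set_toggle(self, data_type: str, purpose: str, allowed: bool) -> None:
        # A toggle-like switch for one data type used for one specific purpose.
        self.toggles[(data_type, purpose)] = allowed

    def is_allowed(self, data_type: str, purpose: str) -> bool:
        # Deny by default until the student has made an explicit choice.
        return self.toggles.get((data_type, purpose), False)
```

Keying toggles on a (data type, purpose) pair is what makes the control granular: a student could permit clickstream data for advising alerts while denying the same data for a retention model.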
Foregrounding norms, values, and expectations
While student data dashboards with privacy preference setting affordances give students control over their information, they also benefit universities. In some cases, institutions will need to set defaults that allow for particular types of information flows. To achieve these ends, higher education administrators should have the ability to turn some student data controls on and off, or to deny certain choices altogether. Thus, we arrive at an important point: what justifies overriding student privacy preferences?
Like other Big Data practices, 'the purposes to which data is being put [for learning analytics], who will have access to it, and how identities are being protected' remain opaque to students (Sclater, 2014, p. 20). Opaque information practices breed distrust, interfere with the development of interpersonal relationships, and motivate individuals to guard information about themselves. So, we can expect that if higher education institutions continue to obfuscate how they use student data for learning analytics, a student backlash is likely, one that will harm the progress of educational data-mining initiatives. When data dashboards inform students about how their institution uses their data and for what purposes, harmful opacity will be reduced, concerns about worrisome abuses brought about by analytics will lessen, and trust will remain in the 'tripartite relationship between learner, teacher and educational institution' (Beattie et al., 2014, p. 424).
Students will generally expect their university to use standard academic information and some personal information about them in order to administer instruction, provide resources, and operate the institution, among other things. However, the literature suggests that learning analytics are pushing—if not exceeding—norm boundaries in ways that make students uncomfortable with emerging data practices. Surveilling students' physical and digital behaviors, for instance, is a practice that does not track with normative expectations, nor is it clearly justifiable. In these situations, institutions have an opportunity to use data dashboards to inform students about the motivations behind edge-case learning analytics and to seek consent. Students can then respond to institutional justifications by setting their privacy preferences in a data dashboard.
To maximize the utility of data dashboards built to support privacy, institutional efforts have to be made to educate students about the motivations driving educational data-mining practices and demonstrate how such practices are in alignment with the norms, values, and expectations of higher education. One way to facilitate student privacy preferences and enable institutions to argue for more or fewer restrictions on student information flows is to embed a justified choice architecture into the dashboard (see Thaler, Sunstein, & Balz, 2012). Choice architecture would 'nudge' students towards particular privacy choices; at the same time, institutions could set default choices with a justifiable argument for why a particular choice is preferable. If dashboards are expected to capture everything about how student data and information will be used and to what ends, they will become unwieldy, overwhelm students with too many communications, and effectively void the usefulness of this informed consent mechanism. Justified choice architecture works against this particular problem.
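A justified choice architecture of this kind can be sketched as a small structure in which every default carries the institution's justification (the 'nudge' shown to the student), students may override the default, and administrators can lock choices the institution must retain. This is a minimal sketch under assumptions; the class and field names are hypothetical, not drawn from any existing system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Choice:
    # Hypothetical sketch of one item in a justified choice architecture.
    default: bool            # the institution's preferred (nudged) setting
    justification: str       # shown to the student alongside the default
    locked: bool = False     # True when the institution overrides preferences
    student_value: Optional[bool] = None  # the student's override, if any

    def set_by_student(self, value: bool) -> bool:
        # Returns True if the student's override was accepted.
        if self.locked:
            return False
        self.student_value = value
        return True

    def effective(self) -> bool:
        # The student's choice wins unless none was made (or the item is locked).
        return self.default if self.student_value is None else self.student_value
```

Requiring a `justification` on every default operationalizes the argument above: an institution cannot add a data flow to the dashboard without articulating why its default setting is preferable.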
Conclusion
In this article, I presented a position that learning analytics highlight existing privacy issues and present new ones related to students' inability to control how institutions use data and information about themselves. By improving the existing technical identity layer with P3P technology and creating privacy dashboards that enable student privacy preference setting, I argued that 1) students will be more fully informed about how their institution uses identifiable data and information and to what ends, and 2) students will gain purposeful controls over information flows. This proposed model of informed consent ultimately works to support student privacy and autonomy.
Some readers of this article may disagree with my conception of privacy-as-control, and I agree that there are a number of other fruitful ways to address student privacy as it relates to learning analytics (see Heath, 2014). However, I argue that the central question regarding student control over identifiable data remains crucial, especially given the increasingly sensitive ways that institutions use Big Data practices to direct and intervene in student lives. If individual autonomy is something we value in a society that espouses liberalism, we need to consider ways to support autonomy—informed consent is one such way.
An additional counterargument that this article may raise concerns the position that institutions do not need to seek informed consent at all. Some may argue that legal frameworks (e.g., FERPA) and regulatory processes (e.g., institutional review of research) nullify this obligation or already account for the potential harms. However, FERPA's 'legitimate educational interest' loophole, which allows for nearly unfettered data aggregation, analysis, and disclosure to 'school officials' (institutional actors and, often, educational technology companies), requires no informed consent practices. Additionally, institutional review boards (IRBs) often view learning analytics projects as forms of assessment, program evaluation, or operational research; IRBs do not need to review these projects and do not require informed consent. Consequently, universities grant themselves an 'ethical review waiver' (Griffiths, 2017, p. 559). In summary, the structures in place are not motivating institutional actors to develop informed consent mechanisms (see Willis, Slade, & Prinsloo, 2016). Inaction with regard to informed consent is not justifiable. Failing to develop some way of procuring consent, using either the model I proposed or otherwise, signals disrespect for students' capacity to live their lives according to their own values and in support of their interests.
The work I presented in this article is a conceptual model, so its efficacy is unknown and it is inherently limited. Next, human-computer interaction researchers and interface designers could test the feasibility and potential impact of the model by building mock interfaces that simulate information controls. Using students as research participants, data should be gathered to, among other things, determine student perceptions of such controls, learn how perceptions fluctuate based on data and information type and source, and test student reactions to various messages from institutions justifying data and information uses along with default settings. Additionally, systems developers should investigate the technical construction of existing institutional identity layers to determine whether these layers can be adapted to enable student information controls. This work could benefit from multi-institution investigations supported by higher education information technology organizations, such as EDUCAUSE and the Coalition for Networked Information. At the least, if colleges and universities find the model presented in this article to be worthwhile, they should review current systems to determine if they enable student privacy controls, and they should prioritize working with technology vendors who build such controls into their applications.
Availability of data and materials
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.