Anonymously Establishing Digital Provenance

B. Palmer, K. Bubendorfer, and I. Welch
p − 1 (q divides p − 1). The parameters p, q, and g are global parameters. The TGC also generates its private key sk_TGC and public key pk_TGC = g^sk_TGC mod p. A tag is a 4-tuple {A = pk_x, B = L_x, C = pk_tag,r = g^sk_tag,r mod p, D = a = g^z mod p} with elements: A = pk_x is the public key for the item; B = L_x = {id = H(x), tagno, License}_sk_x is a license signed with the secret key for the item; C = pk_tag,r = g^sk_tag,r mod p is the one time public key for the reseller r and tag tag; and D = a = g^z mod p is the commitment value used in the zero knowledge proof of knowledge of the one time private key for the reseller. The license B = L_x = {id = H(x), tagno, License}_sk_x contains the identity of the item id = H(x) and the unique tag number. The identity of the item id = H(x) is calculated using a well known hash function H. In this paper, we assume that an item can be uniquely identified by its hash value. To prevent the TGC being able to link actions done by a single reseller together, the reseller will use a separate one time private and public key pair for every tag. We denote this one time tag key as public key pk_tag,r and secret key sk_tag,r for reseller r and tag tag.
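To make this setup concrete, here is a toy Python sketch of the key generation and the four tag fields. The group parameters are deliberately tiny, the signatures on the license and on the tag are omitted, and all helper names are illustrative rather than part of the protocol specification:

```python
import hashlib
import random

# Toy Schnorr-style group: q divides p - 1 and g has order q modulo p.
p, q, g = 23, 11, 4

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)        # (secret key, public key = g^sk mod p)

sk_tgc, pk_tgc = keygen()           # TGC key pair
sk_x,   pk_x   = keygen()           # per-item key pair held by the supplier
sk_tag, pk_tag = keygen()           # reseller's one time tag key pair

def item_id(item: bytes) -> str:
    return hashlib.sha256(item).hexdigest()      # id = H(x)

z = random.randrange(1, q)          # nonce behind the ZKP commitment
tag = {
    "A": pk_x,                                   # public key for the item
    "B": (item_id(b"item x"), 1, "License"),     # license L_x (signature omitted)
    "C": pk_tag,                                 # one time public key pk_tag,r
    "D": pow(g, z, p),                           # commitment a = g^z mod p
}
```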

2.3 Stage 1 - Supplier Generating Tag with TGC

Before an item tag may be generated, a one time registration phase must be completed. The supplier calculates the identity of the item id = H(x) and a public (pk_x) and secret (sk_x) key for the item, and registers the item and public key with the TGC. The TGC may convince itself with out-of-band checks that the party registering the item is indeed the rights owner. Where suppliers wish to remain anonymous, registration messages are sent via an anonymous communication channel. The TGC cannot verify ownership due to the anonymous channel and uses a first-in first-registered default. Anonymous channels are shown as the dotted lines in Figures 1 and 2. The generation of a new tag for an item by the supplier takes place in the six steps shown in Figure 1, specifically: (1) the reseller sends a purchase request to the supplier containing the identity of the item they wish to purchase id = H(x), the one time public key for the tag pk_tag,r, and a commitment value a_r = g^z_r mod p; (2) the supplier then creates a signed tag request containing the






1) id = H(x), pk_tag,r, a_r
2) L_x = {H(x), tagno, License_x}_sk_x
3) {L_x, pk_tag,r, a_r}_sk_x
4) tag = {pk_x, L_x, pk_tag,r, a_r}_sk_TGC
5) tag
6) tag

Fig. 1. Supplier Generating Tag with TGC

license L_x for the item signed using sk_x, the one time public key pk_tag,r, and the commitment value a_r, all signed by sk_x; (3) the supplier sends the signed tag request to the TGC; (4) the TGC checks that the tag number tagno contained in the signed license has not been used for this item before and then constructs and signs tag = {pk_x, L_x, pk_tag,r, a_r}_sk_TGC; (5) the tag is now sent from the TGC to the supplier; and (6) the supplier passes it on to the reseller. The reseller checks the tag has been signed by the TGC, that the license is for the correct item, and that the tag contains the correct one time public key and commitment value.

2.4 Stage 2 - Reseller Instantiating Tag with TGC

The instantiation by a reseller of a new tag takes place in the seven steps shown in Figure 2, specifically: (1) the customer sends a purchase request comprised of the identity of the item they wish to purchase, the one time public key for



(the reseller holds tag = {pk_x, L_x, pk_tag,r, a_r}_sk_TGC)
1) id = H(x), pk_tag2,c, a_c
2) tag, pk_tag2,c, a_c
3) c
   r = z_r + c·sk_tag,r mod q
4) {r, pk_tag2,c, a_c}_sk_tag,r
5) if g^r = a_r · pk_tag,r^c, generate tag_c = {pk_x, L_x, pk_tag2,c, a_c}_sk_TGC
6) tag_c
7) tag_c

Fig. 2. Reseller Passing Tag to Customer



the tag pk_tag2,c, and a commitment value a_c; (2) the reseller sends the one time public key for the tag pk_tag2,c and the commitment a_c to the TGC along with their tag for the item; (3) the TGC then checks that the one time public key has not previously been used for this item and tag number; the TGC and reseller then take part in a zero knowledge proof of knowledge of a discrete logarithm [2], where the prover (reseller) proves to the verifier (TGC) that they know the value sk_tag,r such that pk_tag,r = g^sk_tag,r, using the commitment value a_r = g^z_r (from the tag), and the TGC sends the challenge value c to the reseller; (4) the reseller calculates r = z_r + c·sk_tag,r mod q and sends {r, pk_tag2,c, a_c}_sk_tag,r to the TGC; (5) the TGC checks that the message is signed using sk_tag,r and that g^r = a_r · pk_tag,r^c, as g^r = g^(z_r + c·sk_tag,r) = g^z_r · g^(c·sk_tag,r) = a_r · pk_tag,r^c. The response value r, pk_tag2,c, and a_c are signed using the one time secret key for the tag sk_tag,r to prove to the TGC that the reseller owns this tag. The TGC checks that the tag sent to it by the reseller was signed using its private key sk_TGC and the values pk_tag2,c and a_c, and then uses the zero knowledge proof to detect if the tag has been replayed. If the tag has not been submitted to the TGC by a reseller before, the TGC saves the identity of the item H(x), the public key pk_tag,r, the commitment a_r, and the proof transcript (c, r) of the zero knowledge proof. If this tag is used a second time, the TGC will have two proof transcripts (c_1, r_1) and (c_2, r_2) that have been used with the same commitment value a_r. The TGC can then extract the one time secret key sk_tag,r by computing g^r_1 / g^r_2 = (g^z_r · pk_tag,r^c_1) / (g^z_r · pk_tag,r^c_2) = pk_tag,r^(c_1 − c_2), so log_g pk_tag,r = (r_1 − r_2)/(c_1 − c_2) = sk_tag,r. The TGC can prove that the reseller has replayed the tag to any third party by presenting the two proof transcripts. If the TGC does not detect replay, it generates a new tag that contains the public key pk_tag2,c and commitment value a_c of the customer or second reseller. This tag is then sent to the reseller (6), and the reseller forwards it to the customer (7). The customer then checks that the tag was signed by the TGC and that the license L_x is valid and signed by sk_x. This step can be repeated by the customer to resell this item to another party.
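The proof of knowledge and the replay-detection extraction described above can be illustrated in a few lines of Python (3.8+ for the modular inverse via pow). This is a sketch with toy parameters and our own function names, not an implementation of the full protocol; signatures and message framing are omitted:

```python
import random

# Toy Schnorr-style group: q divides p - 1 and g has order q modulo p.
p, q, g = 23, 11, 4

sk_tag = random.randrange(1, q)        # reseller's one time secret key sk_tag,r
pk_tag = pow(g, sk_tag, p)             # pk_tag,r = g^sk_tag,r mod p

z = random.randrange(1, q)             # prover's nonce z_r
a = pow(g, z, p)                       # commitment a_r = g^z_r mod p

def respond(c):
    # Reseller's response to the TGC's challenge c.
    return (z + c * sk_tag) % q

def verify(c, r):
    # TGC's check: g^r == a_r * pk_tag,r^c (mod p).
    return pow(g, r, p) == (a * pow(pk_tag, c, p)) % p

# Replaying the tag yields two transcripts with the same commitment,
# from which the TGC can extract the one time secret key:
c1, c2 = 3, 7
r1, r2 = respond(c1), respond(c2)
assert verify(c1, r1) and verify(c2, r2)
sk_extracted = ((r1 - r2) * pow(c1 - c2, -1, q)) % q
assert sk_extracted == sk_tag          # sk_tag,r = (r1 - r2)/(c1 - c2) mod q
```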


Security Analysis

Spoofing: An adversary could try and spoof the supplier in one of two ways. It could try and forge a request from the supplier to create a new license, or register the item with the TGC. Forging a request for the supplier to the TGC is equivalent to the fabrication attack described below. To prevent an adversary in control of the network from modifying the registration message, it is encrypted using the public key of the TGC pk_TGC. The checks done by the TGC when a new item is registered should prevent an adversary from being able to register an item they do not control. Fabrication: If the TGC receives a message signed by sk_x requesting a new tag that was not sent by the supplier, then either the adversary knows sk_x, or the adversary has been able to forge a message signed by sk_x, or the adversary is replaying a previously sent message. If the adversary replays a previously sent



message, the TGC will not generate a new tag as the tag number for the item has already been used. The chance of an adversary being able to forge a message signed by sk_x is negligible due to the signature scheme used. Cloning: If an adversary has run the protocol twice with the same input tag and produced two different tags tag_1 and tag_2, with different one time keys pk_1 and pk_2 and different commitments a_1 and a_2, where tag, tag_1, and tag_2 are all signed by the TGC, then either the TGC has generated two tags or the reseller has tampered with the input tag to change the one time key or commitment value. The TGC will not generate two tags, as it will be able to detect replay with two separate runs of the zero knowledge proof with the same one time key pk_tag and commitment a_tag and different challenge values c_1 and c_2. The chance of the reseller being able to tamper with the input tag tag to change the one time key or commitment value is negligible due to the signature scheme used. Network Sniffing: The adversary could intercept a tag that has already been generated by the TGC and send it to a customer, or try and generate a new tag from the tag they have intercepted. In the first case, the chance of the tag containing the correct one time key pk_c and commitment value a_c chosen by the customer is negligible. In the second case, the adversary would have to be able to generate the message {r, pk_tag2,c, a_c}_sk_tag,r and send it to the TGC as part of the zero knowledge proof. If the adversary does not know the secret key sk_tag,r, then the chance of it being able to create the message is negligible.


Anonymity and Verification of the TGC

We cannot always assume that our trusted TGC is trustworthy, because it may be subverted by the owners or third parties through bribery, seizure, or being compromised by attackers. An untrustworthy TGC could be used to reveal the identity of parties using the TGC or allow violation of the security properties provided by our protocol. Identity can be protected by allowing communication with the TGC via an anonymity service such as Tor [3]. This would prevent even the TGC from knowing the identity of the suppliers and resellers, limiting the amount of information it could maliciously reveal. A downside would be that selective revelation of identity could not be easily provided. Verification that the TGC faithfully implements the security protocol can be achieved by requiring the TGC to publish all its actions to a public bulletin board. When a customer gets sent a tag from the reseller, it verifies that the tag has been correctly generated by the TGC by checking the operations the TGC has done to generate the tag. The customer also follows the actions of the TGC on the tag that the reseller had before generating the tag for the customer, and so on down the chain of resellers until it reaches the initial tag creation by the supplier. Privacy is maintained because the only information leaked to the customer is the number of resellers the tag has passed through from the supplier to it.





The Tagged Transaction protocol makes use of digital signatures and zero knowledge proofs. We examine the computational complexity of the Tagged Transaction protocol based on the number of modular operations in Table 1, where n is the complexity of modular exponentiation and m is the complexity of modular division. This table does not take into account any extra computations required to verify the actions of the TGC. The Tagged Transaction protocol relies on the use of the TGC to generate and sign tags. As the actions of the TGC can be verified, there is no requirement for the TGC to be operated by a trusted party, and we envisage many TGCs operating. The supplier for the item can select a TGC when it first generates the tag for an item based on previous relationships, results of verification of the TGC's past actions, availability, or some other metric.

Table 1. Complexity of operations in the Tagged Transaction protocol

Operation                 Customer  Reseller  Supplier  TGC      Total
Generating Tag                      5n        5n + 2m   7n + m   17n + 3m
Reseller Generating Tag   8n        7n + m              9n + m   24n + 2m
Verifying Tag             6n                                     6n


Related Work

Peer Effects in Voluntary Disclosure of Personal Data

R. Böhme and S. Pötzsch

y_i = β0 + β1 · A({y_k : k ∈ P_i}) + β2 · A({y_k : k ∈ P_i, x_k = x_i}) + β3 · a_i + f(t_i) + ε_i

where
– y_i is the dependent variable of loan i;
– P_i is the set of recent loans of loan i;
– A is an aggregation operator that calculates the arithmetic mean of the argument over a specified set of loans;
– x_i is an auxiliary property of loan i, which can appear to build subsets of P_i to test Hypotheses 5–7. In this case, we also have to include the property as a control variable or, for multinomial properties, as fixed effect to avoid spurious results from an unbalanced sample;
– a_i is the amount of loan i, for which we have to control as people might disclose more personal data if they ask for more money;
– f(t_i) is a function generating time dummies to control for low frequency fluctuations over time (annual dummies unless otherwise stated);
– β = (β0, β1, . . .) is a coefficient vector that can be estimated with ordinary least squares to minimize the squared residuals ε_i, i.e., Σ_i ε_i² → min.

We estimate several variants of this general model by including terms for different dependent variables y and auxiliary properties x. We report the estimated coefficients along with empirical standard errors and a significance level for the two-sided test of the null hypothesis β_k = 0. For the interpretation, estimates β̂1 and β̂2 are of most interest. If β̂1 is positive and significant, the dependent variable can be explained by the aggregate realizations of the dependent variable in the respective sets of recent loans. This indicates the existence of peer effects. If both β̂1 and β̂2 are positive and significant, there is evidence for additional explanatory power of loans in the subset sharing the auxiliary property; hence peer effects are stronger if the loans share a property. Hypotheses 2 and 3 concern binary indicators as dependent variables. In these cases, we use logistic regression analyses to regress the logit-transformed odds ratio of the dependent variable on the same set of predictor terms as above. These coefficients are estimated with the maximum likelihood method.
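A minimal sketch of how such specifications can be estimated, using the statsmodels formula API on synthetic data. The variable names are ours, and the peer-effect regressors (peer_all, peer_same) are assumed to be precomputed arithmetic means over each loan's set of recent loans:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "log_len":    rng.normal(6, 1, n),          # y_i: log length of description
    "peer_all":   rng.normal(6, 0.3, n),        # mean y over all recent loans P_i
    "peer_same":  rng.normal(6, 0.3, n),        # mean y over loans with x_k = x_i
    "log_amount": rng.normal(8, 1, n),          # control: log of loan amount a_i
    "year":       rng.integers(2007, 2011, n),  # f(t_i): annual time dummies
})

# OLS for continuous indicators (Hypotheses 1 and 4):
ols = smf.ols("log_len ~ peer_all + peer_same + log_amount + C(year)", df).fit()
print(ols.params[["peer_all", "peer_same"]])    # estimates of beta_1 and beta_2

# Logistic regression for binary indicators (Hypotheses 2 and 3):
df["picture"] = rng.integers(0, 2, n)           # e.g., custom picture provided
logit = smf.logit("picture ~ peer_all + log_amount + C(year)", df).fit(disp=False)
print(np.exp(logit.params["peer_all"]))         # odds multiplier for the peer term
```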


3 Results

3.1 Length of Description

Table 1 shows the estimated coefficients for the regression models specified to test Hypothesis 1 in conjunction with Hypotheses 5–7. Each specification (A–G) is reported in a column. The dependent variable is given by the log of the number of characters in all text field descriptions provided by the loan applicant. For brevity, we do not report all coefficients for multinomial controls and fixed effects, but indicate whether the respective terms are included by the label "yes". Model A estimates a plain peer effect. The term for all recent loans (i.e., β̂1) is positive and highly significant. With a coefficient value of almost 0.9, a change by one order of magnitude in the lengths of description of recent loans translates to about 2.5 times longer descriptions for the average new loan application. Most likely, this specification overestimates the size of the peer effect because other relevant predictors of the verbosity are not controlled for. Model B resolves this by introducing a control for the loan amount (log transformed) and time dummies to capture long-term trends. The relevant coefficient remains positive and highly significant. Therefore our data supports Hypothesis 1. The coefficient for the loan amount is positive and significant, too. People tend to write more if they ask for more money. Models C–F test Hypotheses 5–7. We do find support for Hypothesis 5. Recent loans of the same category have additional explanatory power in predicting the length of description even if base effects of individual categories are controlled for (model C). The picture is more mixed for Hypothesis 6, which can only be supported if the same credit grade is taken as an indicator of social proximity (model D). Loan applicants apparently do not prefer peers of the same sex when mimicking their disclosure behavior (model E). This is most likely not an artifact of an unbalanced sample, as about one quarter of all loans are requested by women. Nor do we find support for Hypothesis 7: borrowers do not seem to copy from successful recent applications more often than from pending or unsuccessful applications (model F). Model G serves as a robustness check to see if any coefficient changes its sign when all predictors are included at the same time. This does not happen, which indicates robustness. However, some estimates lose statistical significance, presumably because of collinearity between the higher number of predictors. Overall, the models explain between 8 and 14 % of the variance in the dependent variable. Note that the bulk of explained variance can be attributed to the direct peer effects (model A) and not, unlike in many other field studies, to the inclusion of fixed effects. The number of cases varies somewhat between models because missing values appear if subsets of recent loans turn out to be empty.

3.2 Provision of a Picture

Table 2 shows the estimated coefficients for the logistic regression models specified to test Hypothesis 2 in conjunction with Hypotheses 5–7. We take the odds



Table 1. Factors influencing the verbosity of descriptions. Dependent variable: length of description (log); specifications A–G in columns. Coefficients for all recent loans: A 0.88*** (0.047); B 0.63*** (0.078); C 0.53*** (0.088); D 0.53*** (0.087). Further terms: recent loans with the same category, the same credit grade, a borrower of the same sex, or complete funding; control: log amount (around 0.18***–0.21***); fixed effects for category, credit grade, gender, complete funding, and year as applicable. Number of cases: 3784 (A), 3784 (B), 3553 (C), 3750 (D), 3777 (E), 3780 (F), 3531 (G); adjusted R² [%]: 8.5, 11.1, 12.9, 11.5, 11.1, 11.0, 13.4. Std. errors in brackets; stat. significance: † p < 0.1, * p < 0.05, ** p < 0.01, *** p < 0.001.

that a loan applicant has uploaded a custom picture as the dependent variable. Overall, 22.6 % of all loan applications contain a custom project picture. Model H tests Hypothesis 2 and finds highly significant peer effects. The interpretation of coefficients is less straightforward for logistic models. A value of β̂1 = 2.3 denotes that the odds of providing a custom picture are exp(β̂1) ≈ 10 times higher if all recent loans contained a custom picture than if none of the recent loans contained one. Model I finds weakly significant support for the strict test of Hypothesis 5 including category fixed effects. The effect remains robust and (unsurprisingly) stronger if the category fixed effects are omitted (model J). This demonstrates the importance of including the respective fixed effects, as different offsets between categories (e.g., positive sign for cars & motorcycles; negative sign for debt restructuring) otherwise feed into the "same category" terms and overstate the true effect size. Models K–M test Hypotheses 6 and 7 and do not find support for any of them. The provision of a custom picture is probably the crudest indicator for several reasons. Its binary scale is susceptible to noise. The provisioning of a picture largely depends on the external factor whether a suitable picture is available.


Table 2. Factors influencing the decision to publish a project picture. Dependent variable: prob. of custom picture (logit link). Coefficients for all recent loans: H 2.27*** (0.421); I 1.90*** (0.530); J 1.48** (0.519); K 2.20*** (0.505). Further terms: recent loans with the same category, the same credit grade, a borrower of the same sex, or complete funding; control: log amount (around 0.14**–0.19**); fixed effects for category, credit grade, gender, complete funding, and year as applicable; constants: −2.94***, −3.43***, −2.98***, −3.25***, −2.95***, −3.01***, −3.91***. Std. errors in brackets; stat. significance: † p < 0.1, * p < 0.05, ** p < 0.01, *** p < 0.001.

And the indicator does not differentiate between pictures of different information content and quality. Nevertheless, we decided to report the results because they combine an objective indicator with the same logistic regression model that is used in the following to estimate peer effects for personal data disclosure. It therefore serves as a reference to better interpret the results in Table 3.
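As a quick arithmetic check of the odds-ratio interpretation given for model H above:

```python
import math

# exp(2.27) ~ 9.7: the odds of providing a custom picture are roughly ten
# times higher if all recent loans contained one than if none did.
print(math.exp(2.27))
```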

3.3 Personal Data Disclosure by Type

Length of description and provision of a picture are imprecise indicators because they do not measure any semantics. A high value could either reflect verbosity or extensive disclosure of personal data. Our hand-coded indicators of data disclosure by type do not share this limitation. Due to resource constraints, data of this granularity is only available for 1558 fully funded loans between November 2008 and January 2010. Since partly funded and unsuccessful loans were not included in the coding task, we are unable to test Hypothesis 7 for subjective indicators. This is not a big shortcoming, as this hypothesis has been refuted on the basis of objective indicators anyway. Table 3 shows selected estimated coefficients for the logistic regression models specified to test Hypothesis 3 in conjunction with Hypotheses 5 and 6.

Table 3. Strength of peer effects on disclosure of personal data (by type). Summary of logit models for the probability of disclosing at least one item of the respective type; predictors (in columns): frequency in recent loans (all, same category, same credit grade) and loan amount. The ten types are ordered by decreasing marginal frequency (62.3 % down to 1.9 %) and include Financial situation (33.9 %), Family and partner (31.4 %), Hobbies and memberships (29.3 %), Housing situation (19.1 %), Name or first name (5.7 %), Health situation (3.5 %), Contact details (2.6 %), and Special skills & qualifications (1.9 %). For the most frequent type (62.3 %), the coefficients are 2.76*** (all recent loans), 0.18 (same category), 0.73* (same credit grade), and 0.28*** (0.080) (amount). Std. errors in brackets; stat. significance: † p < 0.1, * p < 0.05, ** p < 0.01, *** p < 0.001. N = 1558 fully funded loans between Nov 2008 and Jan 2010.

Unlike in the previous tables, predictors appear in columns and different dependent variables in rows. For each row, the dependent variable is defined by the odds that a loan application contains at least one data item of a specific type (including sub-types). The types are ordered by decreasing marginal probability, as reported in the second column. Positive and highly significant peer effects can be found for 6 out of 10 types of personal data. Quite surprisingly, while Hypothesis 5 could be retained for objective indicators, it seems that recent loans with the same category have no additional impact on the decision to disclose data of a particular type. Therefore we refute Hypothesis 5 for this indicator. By contrast, recent loan applications with the same credit grade have significant influence in predicting the disclosure of personal data of the three types with the highest marginal probability of disclosure. With 3 out of 10 significant at the 5 % level (1 out of 20 is the expected value under the null hypothesis), we have to acknowledge weak support for Hypothesis 6. Note that we have also estimated models with matching gender, but none of the relevant coefficients was significant. A side observation in Table 3 is that the disclosure of certain types of personal data, notably data about profession, financial and housing situation, correlates positively with the loan amount, whereas other types of data appear independent of the size of the loan.


Table 4. Factors influencing the identifiability of a borrower. Dependent variable: identifiability index. Coefficients for all recent loans: O 0.61*** (0.127); P 0.56*** (0.143); Q 0.48*** (0.141). Further terms: recent loans with the same category, 0.01 (0.060); with the same credit grade, 0.13* (0.060); with a borrower of the same sex, 0.37 (0.252); control: log amount (0.51* (0.228) to 0.93*** (0.207)); fixed effects for category, credit grade, gender, and year as applicable; constants between −4.67* and −0.77. Number of cases: 1659, 1565, 1651, 1554; adjusted R² [%]: 4.2, 3.9, 4.3, 4.2. Std. errors in brackets.


For a sample of 1663 cases, we have sufficient observations of the identifiability index (see Sect. 2.2). Table 4 shows the estimated coefficients for the regression models specified to test Hypothesis 4 in conjunction with Hypotheses 5 and 6. Again, we find highly significant evidence for peer effects (model O). This supports Hypothesis 4. As in Sect. 3.3 above, Hypothesis 5 is not supported (model P), whereas Hypothesis 6 can be retained at least for the credit grade as matching property (model Q). Compared to the length of description as dependent variable (see Tab. 1), the ratio of explained variance is lower, but this is not uncommon for noisy measurements from field data that exhibit a lot of unexplained heterogeneity over an extended period of data collection.

4 Discussion

4.1 Summary and Interpretation

To the best of our knowledge, this paper is the first to report evidence for peer effects in voluntary disclosure of personal data using field data. The existence of this effect has been conjectured before [6, 26], but so far without evidence. Table 5 summarizes the results for all our hypotheses.

Table 5. Summary of hypothesis tests

              plain             with Hyp. 5     with Hyp. 6       with Hyp. 7
Hypothesis 1  supported         supported       partly retained   not supported
Hypothesis 2  supported         weak support    not supported     not supported
Hypothesis 3  partly retained   not supported   weak support      not tested
Hypothesis 4  supported         not supported   partly retained   not tested

More specifically, we could find plain peer effects for all four indicators of disclosure: objective proxies, such as the length of description or the provision of custom pictures, and subjective ratings alike. The result pattern is less clear for our additional hypotheses on factors that potentially reinforce peer effects, and it is probably too early to generalize from the specific operationalizations made in the context of our data source. Peer effects imply that decisions to disclose personal data are driven by observed behavior of others. Understanding peer effects is relevant in general because, over time, peer effects can create self-reinforcing dynamics that may change the social attitude towards privacy: "if everybody discloses his or her full life on the Internet, it can't be wrong to disclose my information as well." Privacy-friendly interface design and increased user awareness might attenuate this dynamic, but we remain skeptical whether those cues can ever be strong enough to reverse dynamics of descriptive social norms, not to mention the missing incentives of market participants to implement effective cues. A specific relevance of peer effects in social lending emerges for platform designers in the financial industry. If data disclosure is merely a reaction to previous loans, then the fact whether data has been disclosed loses its value as a signal to lenders. A platform providing a better separation of roles, with distinct privileges to access loan applications (e.g., exclusively to registered lenders with a positive account balance), could not only increase borrower privacy, but also make data disclosure decisions more individual and thus signal more valuable information to lenders. This might resolve the apparent puzzle that more disclosure does not always translate into better credit conditions, even after controlling for all available hard information [18, 24].

4.2 Related Work

Aside from the theoretical and empirical works referenced in the introduction to narrow down the research question, the following empirical studies are related most closely to this paper. Gross and Acquisti [26] use field data to measure personal data disclosure and identifiability in online social networks. Their approach is quantitative, but largely explorative with emphasis on descriptive statistics. The authors speculate about peer pressure and herding in their conclusions.



Acquisti, John, and Loewenstein [11] report positive influence of descriptive social norms on disclosure of sensitive personal data. In their laboratory experiments, participants reacted differently if they were told that previous participants revealed sensitive data. In contrast to the case of online social lending, the subjects had to answer closed-form questions and could not see the disclosed data of others. They had to believe in the abstract information instead. Barak and Gluck-Ofri [27] conducted a self-disclosure experiment in the framework of an online forum. They report peer effects from previous posts. In contrast to our study, the dependent variable is less tailored to privacy. Consistent with the tradition in this literature, the authors also analyze disclosure of feelings and thoughts. Empirical research on online social lending also exists in the economic literature (see [14] for a review). Typical research questions concern market efficiency, signs of discrimination, or the influence of ties in the social networks between borrowers and lenders, respectively. Occasionally, length of description is used as an effort measure in these studies [28]. We are not aware of attempts other than our own previous works [18, 24] to quantify personal data disclosure and test theories of privacy behavior empirically with data from online social lending.

4.3 Limitations and Future Work

Compared to laboratory experiments, field studies suffer from limited control. In particular, disclosure of false information remains unnoticed in this study (a problem shared with some experiments, e.g., [11]). Our main results are robust to changes in the window size and the use of quarterly instead of annual time dummies, but refinements are needed to check the robustness against different definitions of the peer group. New research efforts are directed to replicate the study in other contexts. A further goal is to follow the approach in [29] and unify the collection of individual behavioral effects of data disclosure into a more general theory of planned behavior, which spans effects of attitude, social norms, and self-efficacy.

Acknowledgements. This paper has benefited from discussions with Jens Schade and Nicola Jentzsch. We thank our coders Ulrike Jani, Kristina Schäfer, and Michael Sauer for their excellent research assistance.

References

1. Hui, K.L., Png, I.P.: The economics of privacy. In: Hendershott, T.J. (ed.) Handbooks in Information System and Economics, vol. 1, pp. 471–493. Elsevier (2006)
2. Posner, R.A.: The economics of privacy. American Economic Review 71(2), 405–409 (1981)
3. Calzolari, G., Pavan, A.: On the optimality of privacy in sequential contracting. Journal of Economic Theory 130(1), 168–204 (2006)
4. Jentzsch, N.: A welfare analysis of secondary use of personal data. In: Workshop on the Economics of Information Security (WEIS), Harvard University (2010)
5. Margulis, S.T.: Privacy as a social issue and behavioral concept. Journal of Social Issues 59(2), 243–261 (2003)
6. Acquisti, A., Grossklags, J.: Privacy and rationality in individual decision making. IEEE Security and Privacy 3(1), 26–33 (2005)



7. Berendt, B., Günther, O., Spiekermann, S.: Privacy in e-commerce: Stated preferences vs. actual behavior. Communications of the ACM 48(4), 101–106 (2005)
8. Norberg, P.A., Horne, D.R., Horne, D.A.: The privacy paradox: Personal information disclosure intentions versus behaviors. Journal of Consumer Affairs 41(1), 100–126 (2007)
9. Acquisti, A.: Privacy in electronic commerce and the economics of immediate gratification. In: Electronic Commerce (ACM EC), New York, NY, pp. 21–29 (2004)
10. Grossklags, J., Acquisti, A.: When 25 cents is too much: An experiment on willingness-to-sell and willingness-to-protect personal information. In: Workshop on the Economics of Information Security (WEIS), Carnegie Mellon University (2007)
11. Acquisti, A., John, L., Loewenstein, G.: The impact of relative standards on concern about privacy. In: Workshop on the Economics of Information Security (WEIS), University College London (2009)
12. Brandimarte, L., Acquisti, A., Loewenstein, G.: Misplaced confidences: Privacy and the control paradox. In: Workshop on the Economics of Information Security (WEIS), Harvard University (2010)
13. John, L., Acquisti, A., Loewenstein, G.: Strangers on a plane: Context-dependent willingness to divulge personal information. Consumer Research 37(2) (2011)
14. Berger, S., Gleisner, F.: Emergence of financial intermediaries in electronic markets: The case of online P2P lending. BuR – Business Research 2(1), 39–65 (2009)
15. Chen, N., Ghosh, A., Lambert, N.: Social lending. In: Electronic Commerce (ACM EC), Palo Alto, CA, pp. 335–344. ACM Press (2009)
16. Stiglitz, J.E.: Credit rationing in markets with imperfect information. American Economic Review 71(3), 393–410 (1981)
17. Jentzsch, N.: Financial Privacy. Springer, Heidelberg (2007)
18. Böhme, R., Pötzsch, S.: Privacy in online social lending. In: AAAI Spring Symposium on Intelligent Privacy Management, Palo Alto, CA, pp. 23–28 (2010)
19. Bikhchandani, S., Sharma, S.: Herd behavior in financial markets. IMF Staff Papers 47(3), 279–310 (2001)
20. Cialdini, R.B., Kallgren, C.A., Reno, R.R.: A focus theory of normative conduct. Advances in Experimental Social Psychology 24, 201–234 (1991)
21. Chaiken, S., Yaacov, T.: Dual-process Theories in Social Psychology. Guilford Press, New York (1999)
22. Böhme, R., Köpsell, S.: Trained to accept? A field experiment on consent dialogs. In: Human Factors in Computing Systems (ACM CHI), pp. 2403–2406 (2010)
23. Derlega, V.J., Berg, J.H. (eds.): Self-Disclosure. Theory, Research, and Therapy. Plenum Press, New York (1987)
24. Pötzsch, S., Böhme, R.: The Role of Soft Information in Trust Building: Evidence from Online Social Lending. In: Acquisti, A., Smith, S.W., Sadeghi, A.-R. (eds.) TRUST 2010. LNCS, vol. 6101, pp. 381–395. Springer, Heidelberg (2010)
25. Holsti, O.R.: Content analysis for the social sciences and humanities. Addison-Wesley (1969)
26. Gross, R., Acquisti, A.: Information revelation and privacy in online social networks. In: Privacy in the Electronic Society (ACM WPES), pp. 71–80 (2005)
27. Barak, A., Gluck-Ofri, O.: Degree and reciprocity of self-disclosure in online forums. CyberPsychology & Behavior 10(3), 407–417 (2007)
28. Herzenstein, M., Andrews, R., Dholakia, U., Lyandres, E.: The democratization of personal consumer loans? Determinants of Success in Online Peer-to-Peer Lending Communities (June 2008)
29. Yao, M.Z., Linz, D.G.: Predicting self-protections of online privacy. CyberPsychology & Behavior 11(5), 615–617 (2008)

It's All about the Benjamins: An Empirical Study on Incentivizing Users to Ignore Security Advice

Nicolas Christin¹, Serge Egelman², Timothy Vidas³, and Jens Grossklags⁴

¹ INI/CyLab, Carnegie Mellon University
² National Institute of Standards and Technology
³ ECE/CyLab, Carnegie Mellon University
⁴ IST, Pennsylvania State University
[email protected], [email protected], [email protected], [email protected]

Abstract. We examine the cost for an attacker to pay users to execute arbitrary code—potentially malware. We asked users at home to download and run an executable we wrote without being told what it did and without any way of knowing it was harmless. Each week, we increased the payment amount. Our goal was to examine whether users would ignore common security advice—not to run untrusted executables—if there was a direct incentive, and how much this incentive would need to be. We observed that for payments as low as $0.01, 22% of the people who viewed the task ultimately ran our executable. Once increased to $1.00, this proportion increased to 43%. We show that as the price increased, more and more users who understood the risks ultimately ran the code. We conclude that users are generally unopposed to running programs of unknown provenance, so long as their incentives exceed their inconvenience. Keywords: Behavioral Economics, Online Crime, Human Experiments.

1 Introduction

Since the early 2000s, Internet criminals have become increasingly financially motivated [22]. Of particular concern is the large number of "bots," that is, compromised end user machines that can be accessed and controlled remotely by these miscreants. Bots are a security concern, because they are federated in centrally-operated networks ("botnets") that can carry out various kinds of attacks (e.g., phishing site hosting, spam campaigns, distributed denial of service, etc.) on a large scale with relative ease. Botnet operators usually do not launch these attacks themselves, but instead rent out computing cycles on their bots to other miscreants. Estimates on the total number of bots on the Internet range, for the year 2009, from 6 to 24 million unique hosts [31, 4]. Individual botnets can be comprised of hundreds of thousands of hosts. For instance, researchers managed to hijack the Torpig botnet for a period of ten days, and observed on the order of 180,000 new infections over that period, and 1.2 million unique IP addresses contacting the control servers [30].



The scale of the problem, however alarming it may seem, suggests that most Internet users whose computers have been turned into bots either do not seem to notice they have been compromised, or are unable or unwilling to take corrective measures, potentially because the cost of taking action outweighs the perceived benefits. There is evidence (see, e.g., [29]) of hosts infected by multiple pieces of malware, which indicates that a large number of users are not willing to undertake a complete reinstallation of their system until it ceases to be usable. In other words, many users seem to be content ignoring possible security compromises as long as the compromised state does not noticeably impact the performance of the machine. Consequently, bots are likely to be unreliable. For instance, they may frequently crash due to multiple infections of poorly programmed malware. Their economic value to the miscreants using them is in turn fairly low; and indeed, advertised bot prices can be as low as $0.03-$0.04 per bot,1 according to a recent Symantec report [31]. Overall, bot markets are an interesting economic environment: goods are seemingly of low quality, and treachery amongst sellers and buyers is likely rampant [14], as transaction participants are all engaging in illegal commerce. Yet, the large number of bots, and the absence of a notable decrease in this number, seem to indicate that bot markets remain relatively thriving. This puzzle is even more complicated by the fact that most surveyed Internet users profess a strong desire to have secure systems [5]. In this paper, we demonstrate that, far from being consistent with their stated preferences, in practice, users actually do not attach any significant economic value to the security of their systems. While ignorance could explain this state of affairs, we show that the reality is much worse, as some users readily turn a blind eye to questionable activities occurring on their systems, as long as they can themselves make a modest profit out of it. We describe an experiment that we conducted using Amazon's Mechanical Turk, where we asked users to download and run our "Distributed Computing Client" for one hour, with little explanation as to what the software actually did,2 in exchange for a nominal payment, ranging from $0.01 to $1.00. We conducted follow-up surveys with the users who ran the software to better understand the demographics of at-risk populations. We use the results of this experiment to answer the following questions:

– How much do Internet users value access to their machines? If users who are at risk of getting infected could be compensated for the infection, how much should the compensation be?
– Is there a specific population at risk? Do we observe significant demographic skew for certain types of populations in our study?
– Are current mitigations effective at dissuading users from running arbitrary programs that they download from the Internet? When software warns users about the dangers of running unknown programs, are they any less likely to continue running those programs?

1 In this entire paper, all prices are given in US dollars. That is, the "$" sign denotes US currency.
2 As discussed in Section 3, in reality, the software merely collected running process information and other system statistics, ran down a countdown timer, and reported this information back to us; in our experimental condition, it also prompted users for administrative access to run.



Answering these questions is important because it allows us to quantify how risk-seeking people may be, which in turn may help optimize user education and other mitigation strategies. These answers also provide an idea of the upper bound of the value of a bot, and permit us to compare prices set in actual, realized transactions to advertised prices [14]. Being able to dimension such prices helps us better understand which intervention strategies are likely to be successful; for instance, if a security mitigation costs users one hour of work, while they value access to their machine at a few cents, users are unlikely to adopt the security mitigation [13].

2 Related Work

There has recently been a considerable amount of academic research devoted to trying to dimension economic parameters in the realm of online crime. Pioneering works in that realm mostly attempted to measure prices of illegitimately acquired commodities advertised in underground online markets [8, 32]. Industry research is also very active in that field. For instance, Symantec and Verizon issue yearly reports that dimension key economic indicators of the vitality of online criminal activity (see, e.g., [31, 6]). Most of this research, though, relies on passive measurements; advertising channels are monitored, but researchers rarely become party to an actual transaction. As such, the prices monitored and reported may be quite different from the actual sales prices [14]. A few recent papers try to obtain more precise measurements by taking over (or infiltrating) botnets and directly observing, or participating in, the transactions [18, 30]. To complicate matters, the reported value of botnets also varies based on perspective. From a prosecution perspective, costs of crimes may be inflated to seek larger sentences. From a botnet operator perspective, risks associated with operation may warrant hosting in countries that have ambiguous or non-existent cybercrime laws. It is the botnet renter's perspective that we explore. A congressional report estimated that renting an entire botnet in 2005 cost $200-$300 per hour [34]. Renting a bot in 2006 reportedly cost $3-$6 [21]. In 2007, studies show an asking cost as high as $2-$10 [8], but actual prices paid have been as low as $0.25 [35]. Recently, prices have been discerned to cost $50 for several thousand bots for a 24-hour period [24]. Other work aims at uncovering relationships between different online crime actors, and calculating economic metrics (e.g., actors, revenue, market volume, etc.) from network measurements. For instance, Christin et al. [7] gathered four years worth of data on an online scam prevalent in Japan, and show that the top eight groups are responsible for more than half of the malicious activity, and that considerable profit (on the order of $100,000 per year) can be amassed for relatively little risk (comparatively small fines, and very light prison sentences). Likewise, Moore and Edelman [23] assessed the economic value of typosquatting domains by gathering a large number of them, and evidence some disincentives for advertisement networks to intervene forcefully. Closer to the research on which we report in this paper, Grossklags and Acquisti provide quantitative estimates of how much people value their privacy. They show that most people are likely to give up considerable private information in exchange for $0.25 [11]. Similarly, Good et al. conducted a laboratory study wherein they observed whether participants would install potentially risky software with little functional benefit [10].



Fig. 1. Screenshot of the “Distributed Computing Client.” After subjects clicked a start button, a timer ran for one hour. Once the timer expired, a completion code was displayed. Subjects entered this completion code into Mechanical Turk in order to receive payment.

Participants overwhelmingly decided to affirmatively install peer-to-peer and desktop-enhancing software bundled with spyware. Most individuals only showed regret about their actions when presented with much more explicit notice and consent documents in a debriefing session. To most participants, the value extracted from the software (access to free music or screen savers) trumped the potentially dangerous security compromises and privacy invasions they had facilitated. We complement these works by trying to provide a precise measurement of how little it could cost an attacker to have unfettered access to a system with the implicit consent of the owner. We also contribute to a better understanding of behavioral inconsistencies between users' stated security preferences [5] and their observed actions.

3 Experimental Methodology

The key part of our study lies in our experiment, which we ran using Amazon's Mechanical Turk. The objective was to assess at what price, and under which conditions, people would install an unknown application. In this section, we describe the study environment and the two different conditions we created. We analyze our results in Section 4. Mechanical Turk is a marketplace for pairing "workers" with "requesters." Requesters post tasks that are simple for humans to complete, but difficult to automate (e.g., image tagging, audio transcription, etc.). In return, the requesters pay workers micropayments, on the order of a few cents, for successfully completing a task. However, recently researchers have begun using it as a crowd-sourcing tool for performing studies on human subjects [19]. This is because the demographics of Mechanical Turk do not significantly differ from the demographics of worldwide Internet users [27]. But what about the quality of the resulting data? To collect data for a 2008 study, Jakobsson posted a survey to Mechanical Turk while also hiring a market research firm. He concluded that the data from Mechanical Turk was of the same quality, if not higher [16].



In September of 2010, we created a Mechanical Turk task offering workers the opportunity to "get paid to do nothing." Only after accepting our task did participants see a detailed description: they would be participating in a research study on the "CMU Distributed Computing Project," a fictitious project that we created. As part of this, we instructed participants to download a program and run it for an hour (Figure 1). We did not say what the application did. After an hour elapsed, the program displayed a code, which participants could submit to Mechanical Turk in order to claim their payment. Because this study involved human subjects, we required Institutional Review Board (IRB) approval. We could have received a waiver of consent so that we would not be required to inform participants that they were participating in a research study. However, we were curious if, due to the pervasiveness of research tasks on Mechanical Turk, telling participants that this was indeed a research task would be an effective recruitment strategy. Thus, all participants were required to click through a consent form. Beyond the consent form, there was no evidence that they were participating in a research study; all data collection and downloads came from a third-party privately-registered domain, and the task was posted from a personal Mechanical Turk account not linked to an institutional address. No mention of the "CMU Distributed Computing Project" appeared on any CMU websites. Thus, it was completely possible that an adversary had posted a task to trick users into downloading malware under the guise of participating in a research study, using a generic consent form and fictitious project names in furtherance of the ruse. We reposted the task to Mechanical Turk every week for five weeks. We increased the price each subsequent week to examine how our results changed based on the offering price. Thus, the first week we paid participants $0.01, the second week $0.05, the third week $0.10, the fourth week $0.50, and the fifth week $1.00. In order to preserve data independence and the between-subjects nature of our experiment, we informed participants that they could not participate more than once. We enforced this by rejecting results from participants who had already been compensated in a previous week. We further restricted our task to users of Microsoft Windows XP or later (i.e., XP, Vista, or 7).3 In Windows Vista and later, a specific security mitigation is included to dissuade users from executing programs that require administrator-level access: User Account Control (UAC). When a program is executed that requires such access, a prompt is displayed to inform the user that such access is requested, and asks the user if such access should be allowed (Figure 2). To examine the effectiveness of this warning, we created a second between-subjects condition. For each download, there was a 50% chance that the participant would download a version of our software that requested administrator-level access via the application manifest, which in turn would cause the UAC warning to be displayed prior to execution. The main purpose of this experiment was to collect data on the ratio of people who downloaded the application versus viewed the task, and the ratio of people who ran the application versus downloaded the application, as well as how these ratios changed

3 Certain commercial software is identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the software identified is necessarily the best available for the purpose.



Fig. 2. UAC prompt. Users of Windows Vista or 7 saw this prompt in the experimental condition.

based on the payment amount and presence of the UAC warning. Finally, we collected various (anonymous) data from the systems of the participants who ran the application. We designed our application to report anonymous statistics using port 123, the port normally reserved for the Network Time Protocol, since we reasoned that outgoing connections on this port were least likely to be firewalled (beyond the HTTP ports, which were already in use on our server). We specifically collected the following information:

– Experimental Condition: The application reported its version so we could examine if participants were disproportionally executing the application in either the experimental (UAC) or control condition.
– Version: We collected Windows versions to examine whether participants were exposed to the UAC prompt (i.e., XP users who executed the application in the experimental condition were not exposed to the prompt because the UAC feature did not exist in their version). Additionally, this also gave us insights into how up-to-date their software was (i.e., whether they had the latest service packs installed).
– Process List: We collected a list of the image names for all running processes to examine whether participants were taking any security precautions. Specifically, we were looking for known anti-virus (AV) software, as well as indications that they were running the software from within a virtual machine (VM).
– Virtual Machine Detection: To further detect whether our application was being executed from within a virtual machine (VM), we performed two heuristics. First, we used the Red Pill VM-detection routine to report the memory address of the Interrupt Descriptor Table (IDT) and number of CPUs [28]. Second, we reported the name of the motherboard manufacturer. Some VM software will report a generic manufacturer rather than forwarding this information from the host operating system. Obviously this is an incomplete list of VM-detection techniques, but we believe these heuristics allowed us to show a lower bound on VM usage.

Other than collecting this information, our program did absolutely nothing other than running a timer and periodically reporting to our server that it was still running.
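For illustration, a rough Python analogue of this kind of client-side collection is sketched below, assuming the third-party psutil package. The server name and JSON payload format are our inventions, and since the transport on port 123 is not specified above, TCP is shown:

```python
import json
import platform
import socket
import psutil

def collect_stats():
    return {
        "windows_version": platform.platform(),   # OS version and service pack
        "cpus": psutil.cpu_count(),               # input to the Red Pill heuristic
        "processes": sorted({p.name() for p in psutil.process_iter()}),
    }

def report(host="stats.example.org", port=123):
    # Port 123 (NTP) is used because outbound firewalls rarely block it.
    payload = json.dumps(collect_stats()).encode()
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(payload)

if __name__ == "__main__":
    report()
```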



Table 1. Experimental results showing the Mechanical Turk users who viewed the task; of those, the fraction who downloaded the program; and of those, the fraction who ran the program. The latter represents only those who submitted data to our server, and therefore is a lower bound.

                $0.01        $0.05        $0.10        $0.50        $1.00
viewed task       291          272          363          823        1,105
downloaded        141 (49%)    135 (50%)    190 (52%)    510 (62%)    738 (67%)
ran                64 (45%)     60 (44%)     73 (38%)    294 (58%)    474 (64%)

After participants submitted valid payment codes at the end of the experiment, we invited them to complete an exit survey for a $0.50 bonus payment. This bonus payment was identical across all experimental conditions. The survey was designed to capture participants’ risk perceptions during and outside of the study, the security precautions they normally take, and various demographic information.

4 Results

During the course of our five-week study, our task was viewed 2,854 times. This corresponds to 1,714 downloads and 965 confirmed executions. We found that the proportion of participants who executed the program significantly increased with price, though even for a payment of $0.01, 22% of the people who viewed the task downloaded and executed the program (Table 1). This raises questions about the effectiveness of well-known security advice when it competes against even the smallest of incentives. In this section, we analyze our results with regard to differing behaviors among the pricing conditions, the data collected from participants’ systems, and the exit survey data.

4.1 Price Points

Overall, we observed statistically significant differences between the payment amounts with regard to the fraction of participants who downloaded the program after viewing the task description (χ2(4) = 59.781, p < 0.0005). Upon performing post-hoc analysis using Fisher’s exact test, we observed that this was due to differences between the $0.50 and $1.00 price points and each of the three lowest price points (p < 0.002 and p < 0.0005, respectively). We also observed that significantly more people downloaded the program when the price was raised from $0.50 to $1.00 (p < 0.030).

Once participants downloaded the program, our only indication that they had executed it was when our server received data from the program. If a firewall prevented users from reaching port 123 on our server, we assumed that they had not run the program. Thus, our reported proportion of executions to downloads represents a lower bound. Nevertheless, we observed statistically significant differences between the conditions (χ2(4) = 58.448, p < 0.0005). As was the case with the proportion of downloads, post-hoc analysis showed that this was due to the increase in executions at the $0.50 and $1.00 price points (p < 0.01 and p < 0.0001, respectively). Likewise, significantly more people ran the program when the price was raised from $0.50 to $1.00 (p < 0.021).
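For illustration, the same style of analysis can be rerun on the Table 1 counts; the sketch below assumes Python with scipy, as the paper does not say which statistical package the authors used.

```python
# Sketch: chi-square omnibus test and post-hoc Fisher's exact test on the
# view/download counts from Table 1. scipy is an assumed dependency.
from scipy.stats import chi2_contingency, fisher_exact

viewed     = [291, 272, 363, 823, 1105]   # $0.01 ... $1.00
downloaded = [141, 135, 190, 510, 738]

# 2x5 contingency table: downloaded vs. viewed-but-did-not-download.
table = [downloaded, [v - d for v, d in zip(viewed, downloaded)]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.6f}")

# Post-hoc 2x2 comparison of the $0.50 and $1.00 conditions.
_, p_50_100 = fisher_exact([[510, 823 - 510], [738, 1105 - 738]])
print(f"$0.50 vs $1.00: p = {p_50_100:.4f}")
```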



Fig. 3. Task enrollment and completion rates. “Paid” corresponds to users who ran the program for a full hour and correctly submitted a payment code; “Ran” corresponds to users who ran the program, no matter how briefly; “Downloaded” and “Viewed” correspond to users who downloaded our program and viewed the task, respectively. We noticed statistically significant differences in participants’ behaviors at the $0.50 and $1.00 price points.

Thus, participants’ behaviors did not change relative to the price we were paying them until it was increased to $0.50, at which point we saw not only a dramatic increase in the total number of people viewing the task, but also a significant increase in the proportion who opted to perform the task. The increase in the former is purely an artifact of our experimental platform, Mechanical Turk, indicating that many users are unwilling to view tasks that pay less than $0.50. The latter increase is more important, since it shows differences in participants’ behaviors as a function of price. These changes in participants’ behaviors continued as the price was further increased to $1.00.

4.2 Participant Behavior

When we compared the participants who were exposed to the UAC dialog (Figure 2) with the participants in the control condition, we found no significant differences.4 Thus, regardless of price, the warning had no observable effect on participants’ willingness to download and execute an unknown program—even one that required them to explicitly grant it administrator-level permissions. Because of this lack of effect, we disregarded this variable in our analysis. Because the Red Pill VM-detection routine [28] only works reliably on single-CPU computers, we also collected information on the number of CPUs.

4. It is possible that some participants in the UAC condition had disabled this feature, which we did not attempt to detect. However, we do not believe these participants confound our findings since they provide a realistic snapshot of UAC usage.



Using Red Pill, we detected the presence of a VM on a single participant’s machine. Examining each participant’s process list, we were able to confirm that this participant was running VMware. We also detected VMware Tools running on fifteen further machines (sixteen in total), and Parallels Tools running on a single machine. Thus, we can confirm that at least seventeen participants (1.8% of 965) took the precaution of using a VM to execute our code.5 Eleven of these participants were in the $1.00 condition, five were in the $0.50 condition, and one was in the $0.01 condition. The information we collected on participants’ motherboards was not useful in determining VM usage.

When the program was executed, we received a list of the other processes that each participant was running. In this manner we amassed a list of 3,110 unique process names, which we categorized as corresponding either to malware or to security software using several online databases. Because we only recorded the image names and not the sizes, file locations, or checksums, these classifications are rough estimates. We took great care to classify something as malware only when we were completely certain; this yields a lower bound on the number of infected systems. Conversely, we took a liberal approach to categorizing something as security software in order to create an upper bound; it is possible—indeed likely—that some of the image names masquerading as security software were actually malware. All in all, we found no significant differences between the pricing conditions with regard to malware infections or the use of security software; at least 16.4% (158 of the 965 who ran the program) had a malware infection, whereas as many as 79.4% (766 of 965) had security software running.

Surprisingly, we noticed a significant positive association between malware infections and security software usage (φ = 0.066, p < 0.039). That is, participants with security software were more likely to also have malware infections (17.6% of 766), whereas those without security software were less likely to have malware infections (11.6% of 199). While counter-intuitive, this may indicate that users tend to exhibit risky behavior when they have security software installed, because they blindly trust the software to fully protect them.

Once we increased the payment to $0.50, participants made different decisions with regard to their system security. We collected data on their Windows versions in order to examine whether they had applied the most recent service packs. We considered this a proxy for risk aversion—similar to the use of security software; in the computer security context, higher risk aversion should correspond to an increase in up-to-date systems. Indeed, we observed this effect: 72.1% and 67.5% of participants who were paid $0.50 and $1.00, respectively, had a current version of Windows, compared to an average of 54.3% of participants who were paid less (p < 0.0005 for Fisher’s exact test).

Finally, we found that the better-compensated participants also performed the task more diligently. We examined the fraction of participants in each condition who submitted an invalid payment code (Table 2) and found that “cheating” significantly decreased as the payment increased (χ2(4) = 52.594, p < 0.0005). Likewise, we found a statistically significant correlation between the time spent running the executable and the payment amount (r = 0.210, p < 0.0005).

5. We do not know if this was a precaution or not. It is equally likely that participants were simply using VMs because they were Mac or Linux users who wanted to complete a Windows task.
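The security-software/malware association reported above can be recomputed from the published percentages. The cell counts below are reconstructed from those percentages (17.6% of 766 and 11.6% of 199) and are therefore approximate.

```python
# Sketch of the phi-coefficient computation for the 2x2 table of security
# software vs. malware infection; counts are approximate reconstructions.
from math import sqrt

a, b = 135, 631   # security software: infected / clean (135 ~ 17.6% of 766)
c, d = 23, 176    # no security software: infected / clean (23 ~ 11.6% of 199)

phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"phi = {phi:.3f}")   # ~0.066: a small positive association, as reported
```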



Table 2. For each payment amount, each row depicts: the average number of minutes participants spent running the program (out of 60), the percentage of participants who ran the program for the full 60 minutes, and the percentage of participants who attempted to cheat by submitting an invalid payment code (out of the total number of submitted payment codes).

                                                $0.01   $0.05   $0.10   $0.50   $1.00
average runtime in minutes                      39.30   43.67   43.63   52.02   52.65
percentage running the program for 60 minutes   59.4%   63.3%   67.1%   81.3%   82.3%
percentage of invalid codes submitted           46.5%   28.3%   24.6%   12.8%   14.8%

Thus, we observed that not only did the $0.50 and $1.00 price points allow us to recruit more participants, but the overall quality—reliability and modernity—of the machines recruited and the diligence of their operators considerably increased as well.

The measurements we obtained through instrumentation of our web site and of our software give us a fairly detailed picture of user behavior when facing decisions about whether or not to run untrusted code. We were curious to further examine the “at-risk” population of users who elected to run untrusted code, and thus completed our experiment, in terms of demographic and behavioral characteristics. To that effect, after completing the experimental portion of this study, we invited participants who had submitted a valid payment code to complete an exit survey for a bonus payment of $0.50.

4.3 Self-reported Data

In the exit survey we collected information about participants’ demographics, their risk perceptions, and their stated security practices.

Demographics. A total of 513 participants completed the survey. Forty percent of respondents came from India, 30% from the US or Canada, and the majority of the rest were from Europe.6 Using the United Nations’ list of developed countries [33], we partitioned participants based on their nationalities. We observed that the proportion of participants from the developed world increased with the payment amount: from 9.4% to 23.4%. We found that this increase was statistically significant when comparing the $0.01 condition to the $0.50 and $1.00 conditions (p < 0.02 and p < 0.01, respectively), but not when comparing the other conditions. We did not observe significant differences with regard to age (mean μ = 28.1 years, standard deviation σ = 8.95 years) or gender (349 males and 164 females). We gauged participants’ security expertise with four questions:

– Do you know any programming languages?
– Is computer security one of your main job duties?
– Have you attended a computer security conference or class in the past year?
– Do you have a degree in computer science or another technical field (e.g., electrical engineering, computer engineering)?

6. Thirty-three participants listed their nationalities as “Caucasian.” After unsuccessfully trying to locate Caucasia on a map, we omitted their responses to this question.



We observed no differences between the conditions with regard to the individual questions, nor when we created a “security expert score” based on the combination of these factors. That is, the proportion of people who answered affirmatively to each of the questions remained constant across the price intervals. So why did participants execute the program?

Security concerns. Windows’ automatic update functionality is disabled when it determines that the software may have been pirated. We observed a significant correlation between participants from the developing world and unpatched versions of Windows (φ = 0.241, p < 0.0005), which hints at a correlation between software piracy and the developing world. However, we do not believe that this effect is solely responsible for the changes in behavior as a function of price that we observed during our study.

We asked participants to rate the danger of running programs downloaded from Mechanical Turk using a 5-point Likert scale. We found that as the price went up, participants’ perceptions of the danger also increased (F(4, 508) = 3.165, p < 0.014). Upon performing post-hoc analysis, we observed that risk perceptions were similar between the $0.01, $0.05, and $0.10 price points (μ = 2.43, σ = 1.226), as well as between the $0.50 and $1.00 price points (μ = 2.92, σ = 1.293); once the price hit $0.50 and above, participants had significantly higher risk perceptions (t(511) = 3.378, p < 0.001). This indicates that as the payment increased, people who “should have known better” were enticed by the payment: individuals with higher risk perceptions and awareness of good security practices did not take the risk when the payment was set at the lower points, but ignored their concerns and ran the program once the payment was increased to $0.50. These perceptions were not linked to demographic data.

These results suggest that participants may have felt an increased sense of security based on the context in which our program was presented. As mentioned earlier, though, there was no actual proof that this was a legitimate university study, as we purposefully stored the executable on a third-party server and did not provide any contact information. We periodically browsed Mechanical Turk user forums such as Turker Nation [2] and Turkopticon [3], and noticed with interest that our task was ranked as legitimate because we were paying users on time and anti-virus software was not complaining about our program. Clearly, neither of these observations is related in any way to potential security breaches caused by our software. Anti-virus software generally relies on malware being added to blacklists and may not complain about applications that were given full administrative access by a consenting user. Perhaps a warning that the application was sending network traffic would pop up, but most users would dismiss it as consistent with the purpose of a “Distributed Computing Client.” As such, it appears that people reinforce their false sense of security with justifications that are not relevant to the problem at hand.

Last, one Turkopticon user had an interesting comment. He incorrectly believed we were “running a port-scanner,” and was happy to provide us with potentially private information in exchange for the payment he received. This echoes Grossklags and Acquisti’s finding [11] that people do not value privacy significantly.



5 Discussion and Conclusions

In this experiment, we observed what economists have known for centuries: it really is all about the Benjamins.7 We show, in the security context, how users can be motivated by direct incentive payments to ignore commonly known security advice. Even though around 70% of all our survey participants understood that it was dangerous to run unknown programs downloaded from the Internet, all of them chose to do so once we paid them. We also demonstrate that the strength of user participation is consistent with basic economic principles: more money means more interest in the task, more downloads and executions of potentially harmful code, and more commitment to the instructions given [9, 15]. This study serves as a clarion call to security researchers and practitioners: security guidance should not only create incentives for compliance with advice and policies, but also explicitly include mechanisms to support users who are prone to trade away security cheaply.

When investigating the correspondence between security concerns and security behaviors more closely, we found that subjects in the high payment conditions were more concerned about the risks of downloading programs while doing online work. Similarly, these participants were on average better equipped to withstand security threats (i.e., they owned more recently patched machines). In other words, more security-conscious and security-aware subjects were lured into our project by the higher available remuneration. On the one hand, these subjects act consistently in applying security patches to react to their heightened concerns. On the other hand, they waste all their hard security efforts by consensually inviting undocumented code onto their machines and bypassing Windows security stopping blocks (i.e., UAC).

There are a number of interpretations for our observations that fit the model of the boundedly rational computer user. In particular, participation may be due to a strong pull of “immediate gratification” and the revealing of time-inconsistent preferences [5]. Further, the task triggers “relative thinking”: we have to consider the promised ease of our task in conjunction with a wage that is appealing relative to many more cumbersome Mechanical Turk offerings, e.g., annotating pictures or transcribing recordings [17]. As a result, individuals further neglect to consider the harm that they may inflict on themselves. Relative thinking also helps to account for the surprising diligence of our subjects in the $0.50 and $1.00 conditions. At the same time, even in the $0.01 condition, 22% of the users who viewed the task downloaded and ran the program. This indicates that even for a negligible remuneration, Internet users are willing to bear significant risk; indeed, a better title for this paper may be “It’s All About The Abrahams.”8

We further believe that a heightened sense of “false security” explains the involvement of many subjects. First, many of the more concerned individuals owned better patched machines and therefore may have erroneously believed that seemingly legitimate tasks were unlikely to cause harm. Second, this observation was mirrored by the entire user population: we found that the presence of security software was correlated with the number of infections with malicious code on user machines.

7. Benjamin Franklin is pictured on US $100 bills, hence the slang “Benjamin” for banknotes.
8. Abraham Lincoln is pictured on US $0.01 coins.



Third, users’ discussions of our task on message boards may have enticed others to participate despite the lack of any security assurances and tests. Moreover, such messages could easily have been posted by co-conspirators.

To explain why rational individuals who are concerned about security, and who undertake precautions (i.e., install patches and anti-virus software), nevertheless fail to act securely, we can draw from the safety regulation literature. Most prominently, Peltzman outlined in his theory of risk compensation that individuals evaluate the combined demand for safety and usage intensity (e.g., faster driving on a highway) [25]. In the online security context, this means that users with more security-ready systems—irrespective of whether they safeguard themselves or administrators act for them—are comparatively more likely to engage in risky behaviors. For example, they may be more likely to seek free adult content or download undocumented code/media that increases their perceived utility from experience. While this process may be driven by selfish and rational considerations for at least some users, it always increases the risk of collateral damage due to negative externalities. This behavior is particularly significant because malware developers not only carefully conceal their code from users, but also limit the adverse impact on hosts’ systems. Distributed attacks may be implicitly tolerated by users as long as they are not directly affected, and lead to what Bill Cheswick likens to a “toxic waste dump” billowing blue smoke across the Internet [26]. Miscreants can benefit from these negative externalities by harvesting resources at a price far below the social cost they inflict on the user population.

In conclusion, providing direct incentives for the installation of undocumented code constitutes another malware distribution channel. In this manner, the notion of a “Fair Trade” botnet is not out of the realm of possibility.9 Such an approach may be relatively short-lived, as in previous examples with some free centralized systems (e.g., [20]), if distributed work platforms exercise considerably more control over the advertised tasks. However, a payment scheme could also give rise to unforeseen semi-legitimate business models that cannot easily be outlawed. We argue that once the payment is high enough and the inconvenience low enough, users have little incentive to comply with security requests [13]. Worse, behavioral biases further magnify the impact of negative externalities in network security [12]. By allowing attackers to use their machines for further attacks, users themselves become a threat to the network. We hope this research will stimulate further discussion on how to realign user incentives, which have seemingly drifted very far from what would be needed to produce a desirable outcome.

Acknowledgments. This paper greatly benefited from discussions with Stuart Schechter, David Molnar, and Alessandro Acquisti. Lorrie Cranor assisted with IRB documentation. This work is supported in part by CyLab at Carnegie Mellon under grant DAAD19-02-1-0389 from the Army Research Office, and by the National Science Foundation under ITR award CCF-0424422 (Team for Research in Ubiquitous Secure Technology) and IGERT award DGE-0903659 (Usable Privacy and Security).

9. “Fair Trade” is a certification for foods coming from producers who receive a living wage [1].



References

1. Fair Trade USA
2. Turker Nation
3. Turkopticon
4. Acohido, B.: Are there 6.8 million – or 24 million – botted PCs on the Internet? (Last accessed September 16, 2010)
5. Acquisti, A., Grossklags, J.: Privacy and rationality in individual decision making. IEEE Security & Privacy 3(1), 26–33 (2005)
6. Baker, W., Hutton, A., Hylender, C., Novak, C., Porter, C., Sartin, B., Tippett, P., Valentine, J.: Data breach investigations report. Verizon Business Security Solutions (April 2009)
7. Christin, N., Yanagihara, S., Kamataki, K.: Dissecting one click frauds. In: Proceedings of the Conference on Computer and Communications Security (CCS), Chicago, IL, pp. 15–26 (October 2010)
8. Franklin, J., Paxson, V., Perrig, A., Savage, S.: An inquiry into the nature and causes of the wealth of internet miscreants. In: Proceedings of the ACM Conference on Computer and Communications Security (CCS), Alexandria, VA, pp. 375–388 (October 2007)
9. Gaechter, S., Fehr, E.: Fairness in the labour market – A survey of experimental results. In: Bolle, F., Lehmann-Waffenschmidt, M. (eds.) Surveys in Experimental Economics, Bargaining, Cooperation and Election Stock Markets. Physica Verlag (2001)
10. Good, N., Dhamija, R., Grossklags, J., Aronovitz, S., Thaw, D., Mulligan, D., Konstan, J.: Stopping spyware at the gate: A user study of privacy, notice and spyware. In: Proceedings of the Symposium on Usable Privacy and Security (SOUPS 2005), Pittsburgh, PA, pp. 43–52 (July 2005)
11. Grossklags, J., Acquisti, A.: When 25 cents is too much: An experiment on willingness-to-sell and willingness-to-protect personal information. In: Proceedings (online) of the Sixth Workshop on Economics of Information Security (WEIS), Pittsburgh, PA (2007)
12. Grossklags, J., Christin, N., Chuang, J.: Secure or insure? A game-theoretic analysis of information security games. In: Proceedings of the 2008 World Wide Web Conference (WWW 2008), Beijing, China, pp. 209–218 (April 2008)
13. Herley, C.: So long, and no thanks for the externalities: The rational rejection of security advice by users. In: Proceedings of the New Security Paradigms Workshop (NSPW), Oxford, UK, pp. 133–144 (September 2009)
14. Herley, C., Florêncio, D.: Nobody sells gold for the price of silver: Dishonesty, uncertainty and the underground economy. In: Proceedings (online) of the Eighth Workshop on Economics of Information Security (WEIS) (June 2009)
15. Horton, J., Rand, D., Zeckhauser, R.: The online laboratory: Conducting experiments in a real labor market. Harvard Kennedy School and NBER working paper (May 2010)
16. Jakobsson, M.: Experimenting on Mechanical Turk: 5 How Tos (July 2009)
17. Kahneman, D., Tversky, A.: Choices, values and frames. Cambridge University Press, Cambridge (2000)
18. Kanich, C., Kreibich, C., Levchenko, K., Enright, B., Voelker, G., Paxson, V., Savage, S.: Spamalytics: An empirical analysis of spam marketing conversion. In: Proceedings of the Conference on Computer and Communications Security (CCS), Alexandria, VA, pp. 3–14 (October 2008)
19. Kittur, A., Chi, E., Suh, B.: Crowdsourcing user studies with Mechanical Turk. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2008), Florence, Italy, pp. 453–456 (2008)
20. Kucera, K., Plaisent, M., Bernard, P., Maguiraga, L.: An empirical investigation of the prevalence of spyware in internet shareware and freeware distributions. Journal of Enterprise Information Management 18(6), 697–708 (2005)
21. Matwyshyn, A.: Penetrating the zombie collective: Spam as an international security issue. SCRIPT-ed 4 (2006)
22. Moore, T., Clayton, R., Anderson, R.: The economics of online crime. Journal of Economic Perspectives 23(3), 3–20 (2009)
23. Moore, T., Edelman, B.: Measuring the perpetrators and funders of typosquatting. In: Sion, R. (ed.) FC 2010. LNCS, vol. 6052, pp. 175–191. Springer, Heidelberg (2010)
24. Namestnikov, Y.: The economics of botnets. Analysis on Viruslist.com, Kaspersky Lab (2009)
25. Peltzman, S.: The effects of automobile safety regulation. Journal of Political Economy 83(4), 677–726 (1975)
26. Reeder, R., Arshad, F.: SOUPS 2005. IEEE Security & Privacy 3(5), 47–50 (2005)
27. Ross, J., Zaldivar, A., Irani, L., Tomlinson, B.: Who are the Turkers? Worker demographics in Amazon Mechanical Turk. Technical Report SocialCode-2009-01, University of California, Irvine (2009)
28. Rutkowska, J.: Red Pill... or how to detect VMM using (almost) one CPU instruction (November 2004)
29. Saroiu, S., Gribble, S., Levy, H.: Measurement and analysis of spyware in a university environment. In: Proceedings of the 1st USENIX Symposium on Networked Systems Design & Implementation (NSDI 2004), San Francisco, CA, pp. 141–153 (2004)
30. Stone-Gross, B., Cova, M., Cavallaro, L., Gilbert, B., Szydlowski, M., Kemmerer, R., Kruegel, C., Vigna, G.: Your botnet is my botnet: Analysis of a botnet takeover. In: Proceedings of the Conference on Computer and Communications Security (CCS), Chicago, IL, pp. 635–647 (October 2009)
31. Symantec Corp.: Symantec global internet security threat report: Trends for 2009 (April 2010)
32. Thomas, R., Martin, J.: The underground economy: Priceless. ;login: 31(6), 7–16 (2006)
33. United Nations Statistics Division: Composition of macro geographical (continental) regions, geographical sub-regions, and selected economic and other groupings (April 2010)
34. Wilson, C.: Botnets, cybercrime, and cyberterrorism: Vulnerabilities and policy issues for Congress. Library of Congress, Congressional Research Service, Washington, DC (January 2008)
35. Zeltser, L.: So long script kiddies. Information Security Magazine (2007)

Evaluating the Privacy Risk of Location-Based Services

Julien Freudiger, Reza Shokri, and Jean-Pierre Hubaux

LCA1, EPFL, Switzerland
[email protected]

Abstract. In modern mobile networks, users increasingly share their location with third parties in return for location-based services. Previous work shows that operators of location-based services may identify users based on the shared location information even if users make use of pseudonyms. In this paper, we push the understanding of the privacy risk further. We evaluate the ability of location-based services to identify users and their points of interest based on different sets of location information. We consider real-life scenarios of users sharing location information with location-based services and quantify the privacy risk by experimenting with real-world mobility traces.

1 Introduction

In traditional cellular networks, users share their location with their network operator in order to obtain voice and data services pervasively. With the emergence of data services, users increasingly share their location with other parties such as location-based services (LBSs). Specifically, users first obtain their location by relying on the localization capability of their mobile device (e.g., GPS or wireless triangulation), share it with LBSs, and then obtain customized services based on their location. Yet, unlike cellular operators, LBSs are mostly provided for free and generate revenue with location-based advertising. Hence, there is a key difference between the business models of LBS providers and cellular operators: LBS providers aim at profiling their users in order to serve tailored advertisements.

Subscribers of cellular networks know that their personal data is contractually protected. On the contrary, users of LBSs often lack an understanding of the privacy implications caused by the use of LBSs [17]. Some users protect their privacy by hiding behind pseudonyms that are mostly invariant over time (e.g., usernames).

Previous works identified privacy threats induced by the use of LBSs and proposed mechanisms to protect user privacy. Essentially, these mechanisms rely on trusted third-party servers that anonymize requests to LBSs. However, such privacy-preserving mechanisms are not widely available, and users continue sharing their location information unprotected with third parties. Similarly, previous work usually considers the worst-case scenario in which users continuously upload their location to third parties (e.g., traffic monitoring systems). Yet, with most LBSs, users do not share their location continuously; instead, they connect episodically to LBSs depending on their needs and thus reveal only a few location samples of their entire trajectory. For example, a localized search on Google Maps [19] only reveals a location sample upon manually connecting to the service.



In this work, we consider a model that matches the common use of LBSs: we do not assume the presence of privacy-preserving mechanisms, and we consider that users access LBSs on a regular basis (but not continuously). In this setting, we aim at understanding the privacy risk caused by LBSs. To do so, we experiment with real mobility traces and investigate the dynamics of user privacy in such systems by measuring its erosion. In particular, we evaluate the success of LBSs in predicting the true identity of pseudonymous users and their points of interest based on different samples of mobility traces. Our results explore the relation between the type and quantity of data collected by LBSs and their ability to de-anonymize and profile users. We quantify the potential of these threats by carrying out an experiment based on real mobility traces from two cities, one in Sweden and one in Switzerland. We show that LBS providers are able to uniquely identify users and accurately profile them based on a small number of location samples observed from the users. Users with a strong routine face a higher privacy risk, especially if their routine does not coincide with that of others. To the best of our knowledge, this work is among the first to investigate the erosion of privacy caused by the sharing of location samples with LBSs using real mobility traces and to quantify the privacy risk.

2 State of the Art

Location-based services [1, 19, 12, 30] let users discover their environment. While some services, such as road-traffic monitoring systems, require users to continuously share their location, in most services users share their location episodically. We consider users that manually reveal a few samples of their mobility. The IETF Geopriv working group [3] aims at delivering specifications to implement location-aware protocols in a privacy-conscious fashion. It suggests that independent location servers deliver data to LBSs according to user-controlled privacy policies. We complement the IETF proposal by quantifying the privacy threat induced by sharing specific locations with LBSs.

Privacy-preserving mechanisms (PPMs) impede LBSs from tracking and identifying their users [5, 20, 21, 22, 24, 32, 43]. Most mechanisms alter users’ identifiers or location samples, and either run on third-party anonymizing servers or directly on mobile devices. PPMs’ effectiveness is usually evaluated by measuring the achieved level of privacy [10, 38, 41]. PPMs are rarely used in practice. One reason may be that users do not perceive the privacy threat because they are not intensively sharing their location. We aim at clarifying the privacy threat when users disregard PPMs.

Without PPMs, pseudonymous location information identifies mobile users [4, 6, 26, 33]. Beresford and Stajano [4] identified all users in continuous location traces. Using GPS traces from vehicles, two studies by Hoh et al. [23] and Krumm [26] found the home addresses of most drivers. De Mulder et al. [33] could identify mobile users in a GSM cellular network from pre-existing location profiles. These works rely on continuous location traces precisely capturing users’ whereabouts. Yet, in most scenarios, users share location samples episodically. Golle and Partridge [18] could identify most of the US working population using two location samples, i.e., approximate home and work locations. It remains unclear, however, whether users of LBSs face this threat. We consider real-life scenarios and investigate associated privacy risks.


Fig. 1. System model. Users episodically upload their location wirelessly to LBSs.

Ma et al. [31] study the erosion of privacy caused by published anonymous mobility traces and show that an adversary can relate location samples to published anonymous traces. We push the analysis further by considering an adversary without access to anonymized traces.

3 System Model

We present our assumptions regarding LBSs and the associated privacy threats.

3.1 Network Model

We study a network (Fig. 1) of mobile users equipped with wireless devices communicating with LBSs via a wireless infrastructure. Wireless devices feature localization technology, such as GPS or wireless triangulation, for users’ localization. The geographic location of a user is denoted by l. The wireless infrastructure relies on technology such as WiFi, GSM or 3G. LBSs are operated by independent third parties that provide services based on users’ location.

Users send their location together with a service request to LBSs. For each request, users may identify themselves to the LBS using proper credentials. We assume that users are identified with pseudonyms (i.e., fictitious identifiers), such as their username, HTTP cookie or IP address. Some services may require user registration with username and password, whereas others may use HTTP cookies to recognize users. LBSs provide users with services using the location information from requests and store the collected users’ locations in a database. Each location sample is called an event [39], denoted by <i, t, l>, where i is a user’s pseudonym, t is the time of occurrence, and l is a user’s location. A collection of user events forms a mobility trace.

3.2 Threat Model

LBS operators passively collect information about the locations of pseudonymous users over time. For example, an LBS can observe location samples of user A over the course of weeks (Fig. 2). A priori, the LBS also knows the map over which users move and has access to geographic information systems that provide details such as points of interest on maps [8]. In addition, the LBS may use the increasing amount of location information available from other sources, such as public transportation systems [37]. We consider that the LBS aims at obtaining the true identity of its users and their points of interest. To do so, the LBS studies the collected information. Even if communications are pseudonymous, the spatio-temporal correlation of location traces may serve as a quasi-identifier [5, 6, 9, 18].



Fig. 2. Threat model. The LBS learns multiple locations of user A over several days. It can then infer the activities of user A and possibly obtain his true identity.

This is a significant threat to user privacy, as such information can help the LBS to profile users. In this work, we investigate the ability of LBSs to identify users and their points of interest based on the collected information.
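For concreteness, the event record and mobility trace defined in Section 3.1 could be represented as follows; this is only a sketch, and the concrete field types are our assumptions, as the paper defines the tuple abstractly.

```python
# Sketch of the event <i, t, l> and of a mobility trace; types are assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Event:
    pseudonym: str                  # i: username, HTTP cookie, IP address, ...
    timestamp: float                # t: time of occurrence (e.g., Unix time)
    location: Tuple[float, float]   # l: user's location as (latitude, longitude)

# A mobility trace is a collection of one user's events.
MobilityTrace = List[Event]
```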

4 Privacy Erosion

In this section, we describe the process of privacy erosion caused by the use of LBSs. To do so, we first discuss how users tend to share their location with LBSs. Then, we explain how LBS operators can obtain the true identity of their users and their points of interest from the collected information.

4.1 Collection of Traces by LBSs

Depending on the provided service, LBSs collect location samples about their users. For example, services such as Foursquare, Google Maps, Gowalla, or Twitter let users connect with friends, discover their environment, or optimize their mobility. Depending on their needs, users access such LBSs at different times and from different locations, thus revealing multiple location samples. In general, users of these services do not continuously share their locations. Instead, users have to manually decide to share their location with their mobile devices. We classify popular LBSs into three broad categories and describe the shared information in each case.

Localized Search. Many LBSs enable users to search for services around a specific location (e.g., localized Google Search [19]). Localized searches offer mobile subscribers spontaneous access to nearby services. A user’s location acts as a spatial query to the LBS. For example, users can obtain the location of nearby businesses, products or other local information, depending on the type of information provided by the LBS. Such information helps users navigate unfamiliar regions and discover unknown places. Users episodically connect to LBSs, revealing samples of their mobility. LBSs obtain statistical information about visited locations and learn popular locations and user habits. Yet, LBSs do not learn the actual activity of mobile users (e.g., the name of the restaurant), as they do not know the user’s decision about the provided information.



Street Directions. Another popular use of LBSs consists in finding a route between two locations. Typically, a user shares their location with an LBS and requests the shortest route to another location (e.g., Google Maps). Users of such services usually reveal their current location and a potential destination. Hence, users may leak their home/work locations to LBSs in addition to samples of their mobility. This enables LBSs to obtain statistical information about the preferred origins and destinations of mobile users.

Location Check-ins. A novel type of location-based service lets users check in to specific places in return for information related to the visited location [12]. For example, it can be used to check into shops, restaurants or museums. It allows users to meet other users that share similar interests and to discover new aspects of their city through recommendations [30]. With such services, users not only precisely reveal their location (GPS coordinates), but also their intention. Indeed, the LBS can learn the current activity of its users. Users can check in to public places, but also private homes.

In summary, depending on the provided service, users share different samples of their mobility. In order to take this into account, in the following we consider that LBSs obtain various types of location samples out of users’ whereabouts.

4.2 Attacks by LBSs

The spatial and temporal information contained in mobility traces may serve as location-based quasi-identifiers [6, 9]: an LBS may obtain the true identity and points of interest of its pseudonymous users from the collected mobility traces.

Location-Based Quasi-Identifiers. Quasi-identifiers were introduced by Dalenius [9] in the context of databases. They characterize a set of attributes that in combination can be linked to identify to whom a database record refers (see [34] for more). In [6], Bettini et al. extend the concept to mobile networking. They consider the possibility of identifying users based on spatio-temporal data and propose the concept of location-based quasi-identifiers. They describe how a sequence of spatio-temporal constraints may specify a mobility pattern that serves as a unique identifier. For example, as already mentioned, Golle and Partridge [18] identify location-based quasi-identifiers by showing that home and work locations uniquely identify most of the US population. Hence, if an LBS learns users’ home and work locations, it can obtain their identity with high probability. In this work, we assume that the LBS succumbs to the temptation of finding the identity of its users. To do so, we consider that the LBS uses the home and work locations of users as location-based quasi-identifiers.

Inferring Home and Work Locations. Previous works investigated the problem of characterizing and extracting important places from pseudonymous location data. These works propose various algorithms to infer important locations based on the spatial and temporal evidence of the location data. We group existing work in two categories. In the first category [2, 23, 26], the authors use clustering algorithms to infer the homes of mobile users. For example, in [26], Krumm proposes four different clustering techniques to identify the homes of mobile users in vehicular traces.



The traces are pseudonymous and contain time-stamped latitudes and longitudes. Similarly, in [23], Hoh et al. propose a k-means clustering algorithm to identify the homes of mobile users in anonymous vehicular traces: the traces do not carry pseudonyms, but contain the speed of vehicles in addition to their location. In the second category [28, 29], Liao et al. propose machine learning algorithms to infer different types of activities from mobile users’ data (e.g., home, work, shops, restaurants). Based on pedestrian GPS traces, the authors are able to identify (among other things) the home and work locations of mobile users.

We rely on previous work to derive an algorithm that exclusively infers users’ home and work locations based on the spatial and temporal constraints of location traces. The algorithm has two steps: first, it spatially clusters events to identify frequently visited regions; second, it temporally clusters events to identify home/work locations.

The spatial clustering of the events uses a variant of the k-means algorithm as defined in [2]: it starts from one random location and a radius. All events within the radius of the location are marked as potential members of the cluster. The mean of these points is computed and is taken as the new centre point. The process is repeated until the mean stops changing. Then, all the points within the radius are placed in the cluster and removed from consideration. The procedure repeats until no events remain. The number of points falling into a cluster corresponds to its weight and is stored along with the cluster location. Clusters with a large weight represent frequently visited locations. A sketch of this clustering step is given below.

Based on the output of the spatial filtering, the algorithm uses temporal evidence to further refine the possible home/work locations. In practice, users have different temporal patterns depending on their activities (e.g., students). The algorithm considers simple heuristics that apply to the majority of users. For example, most users spend the night at home and commute at the beginning/end of the day. The algorithm considers all events in each cluster and labels them as home or work; some events may remain unlabeled. It applies two temporal criteria. First, it checks the duration of stay at each cluster by computing the time difference between the arrival and departure at the cluster: a user who stays more than one hour in a certain cluster overnight is likely to have spent the night at home, so the algorithm labels events occurring at such a cluster as home events. Second, the algorithm labels events occurring after 9am and before 5pm as work events. Finally, for each cluster, the algorithm checks the number of events labelled home/work and deduces the most probable home/work locations.

Inferring User Points of Interest. Usually, LBSs use the content of queries to infer the points of interest of their users. Yet, LBSs may further profile users by analyzing the location of multiple queries and inferring users’ points of interest. We use the spatial clustering algorithm defined above to obtain the possible points of interest of users, which we call uPOIs: a uPOI is a location regularly visited by a user. For each identified uPOI, we store the number of visits of the user and derive the probability Pvi that a user i visits a specific uPOI, i.e., the number of visits to the uPOI normalized by the total number of visits to uPOIs.
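The spatial clustering step referenced above might look as follows. This is a sketch under simplifying assumptions—planar Euclidean distance instead of geographic distance, and an arbitrary convergence threshold—not the authors’ implementation.

```python
# Sketch of the k-means variant from [2] as described above: grow a cluster
# around a random starting event, recompute the mean until it stops changing,
# then remove the cluster's points and repeat until no events remain.
import random

def dist(p, q):
    # Planar Euclidean distance; a simplification for short distances.
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def mean(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def spatial_clusters(locations, radius):
    """locations: list of (lat, lon) event locations.
    Returns (centre, weight) pairs, one per cluster."""
    remaining = list(locations)
    clusters = []
    while remaining:
        centre = random.choice(remaining)        # start from one random location
        while True:
            members = [p for p in remaining if dist(p, centre) <= radius]
            new_centre = mean(members)           # mean of the candidate members
            if dist(new_centre, centre) < 1e-9:  # the mean stopped changing
                break
            centre = new_centre
        # Place the points within the radius in the cluster, remove them from
        # consideration, and record the cluster's weight (its number of points).
        clusters.append((centre, len(members)))
        remaining = [p for p in remaining if dist(p, centre) > radius]
    return clusters
```

Clusters with large weights then feed both the temporal home/work labelling heuristics and the uPOI visit probabilities Pvi described above.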
Metrics. The real home/work addresses are unavailable in our data sets. Hence, we apply our algorithm to the original mobility traces and derive a baseline of home/work locations. We then evaluate the probability of success of the LBS by comparing the baseline to the outcome of our algorithm on the samples of location data collected by the LBS.



In other words, we compare the home/work location pairs predicted from the sampled traces with the baseline. In practice, it is complicated to obtain the real home and work locations of users (i.e., the ground truth) in mobility traces without threatening their privacy. Because no real ground truth is available, this approach does not guarantee that we have identified the real home/work locations. Yet, it allows us to compare the effectiveness of the attack in various conditions. The probability Ps of a successful identification by the LBS is then:

Ps = (number of home/work pairs correctly guessed) / (total number of home/work pairs)

This probability measures the ability of LBSs to find the home/work locations from sampled traces and thus uniquely identify users. This metric relies on the assumption that home/work location pairs uniquely identify users [18] (in Europe as well): it provides an upper bound on the identification threat, as home/work location pairs may in practice be insufficient to identify users, especially in the presence of uncertainty about the home/work locations.

We also evaluate the normalized anonymity set of the home and work pairs of mobile users. To do so, we compute the number of home/work locations that are within a certain radius R of the home/work location of a certain user i. For every user i, we define its home location as hi and its work location as wi. For each user i, we have:

Ai = (1 / (N − 1)) Σj≠i 1{d(hj, hi) < R} · 1{d(wj, wi) < R}

where N is the total number of users, d(·, ·) is the distance between two locations, and 1{·} is the indicator function.
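Both metrics can be sketched in a few lines under the same illustrative data layout as in the clustering sketch (coordinate pairs and a radius R in the same units); pairing guessed and baseline home/work pairs by user index is our assumption.

```python
# Sketch of the identification probability Ps and of user i's normalized
# anonymity set; data layout and the radius R are illustrative.
def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def identification_probability(guessed_pairs, baseline_pairs):
    # Fraction of users whose predicted home/work pair matches the baseline.
    correct = sum(1 for g, b in zip(guessed_pairs, baseline_pairs) if g == b)
    return correct / len(baseline_pairs)

def anonymity_set(i, homes, works, R):
    # Fraction of other users whose home AND work lie within R of user i's.
    n = len(homes)
    close = sum(1 for j in range(n) if j != i
                and dist(homes[j], homes[i]) < R
                and dist(works[j], works[i]) < R)
    return close / (n - 1)
```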
