
Sunday, 30 September 2018

High-Profile Instagram Accounts Hacked For Ransom In A Recent Campaign

Instagrammers, particularly those who have influential profiles, should get ready to tackle any troublesome situation. Hackers now target high-profile Instagram

High-Profile Instagram Accounts Hacked For Ransom In A Recent Campaign on Latest Hacking News.



Mutagen Astronomy – Linux Vulnerability Hits CentOS, Debian, and Red Hat Distros

Researchers have discovered a critical vulnerability that allegedly affects multiple Linux distros. The vulnerability, named Mutagen Astronomy, allows an attacker

Mutagen Astronomy – Linux Vulnerability Hits CentOS, Debian, and Red Hat Distros on Latest Hacking News.



Facebook Says Three Different Bugs Are Responsible For The Massive Account Hacks

Just recently, Facebook disclosed a massive hacking attack on 50 million accounts. To mitigate the attack, Facebook had to reset

Facebook Says Three Different Bugs Are Responsible For The Massive Account Hacks on Latest Hacking News.



Facebook Ad Targeting Exploits Users’ 2FA Phone Numbers

Despite facing criticism and a heavy fine, Facebook does not seem to be backing off of its annoying steps. Recently,

Facebook Ad Targeting Exploits Users’ 2FA Phone Numbers on Latest Hacking News.



Apple DEP Authentication Flaw Leaves Devices Vulnerable To Malicious MDM Enrolling

Researchers discovered a vulnerability in Apple’s Device Enrollment Program (DEP). This Apple DEP authentication flaw could allow potential attackers

Apple DEP Authentication Flaw Leaves Devices Vulnerable To Malicious MDM Enrolling on Latest Hacking News.



Firefox Monitor Has Begun To Track Breached Email Addresses

Mozilla has finally launched Firefox Monitor, a website that connects to Troy Hunt’s Have I Been Pwned? (HIBP), one of

Firefox Monitor Has Begun To Track Breached Email Addresses on Latest Hacking News.



DoorDash Customers Possibly Suffered Credential Stuffing Attack

For almost a month, the customers of the online food delivery company, DoorDash, flooded social media platforms with reports of

DoorDash Customers Possibly Suffered Credential Stuffing Attack on Latest Hacking News.



Saturday, 29 September 2018

Mojave Flaws Allow An Attacker To Bypass Full Disk Access Requirement

Right after the launch of the latest MacOS Mojave, researchers have begun discovering various security vulnerabilities. Amidst the claims of

Mojave Flaws Allow An Attacker To Bypass Full Disk Access Requirement on Latest Hacking News.



Uber has agreed to pay more than $140 Million for a data breach settlement

The ride-sharing company Uber has agreed to pay $148 million to settle the massive data breach in 2016

Uber has agreed to pay more than $140 Million for a data breach settlement on Latest Hacking News.



Zero-Day MacOS Mojave Privacy Bypass Bug Exposes Protected Files

A security researcher discovered a zero-day vulnerability in the MacOS Mojave that allows hackers to access secured system files. This

Zero-Day MacOS Mojave Privacy Bypass Bug Exposes Protected Files on Latest Hacking News.



A Top Facebook Bug Bounty Hunter Shares Their Insights on the Facebook Breach

Pranav Hiverekar, one of the top Facebook bug bounty hunters/hackers, shares his insights on the Facebook breach. What is your

A Top Facebook Bug Bounty Hunter Shares Their Insights on the Facebook Breach on Latest Hacking News.



Facebook Hacked — 10 Important Updates You Need To Know About

If you also found yourself logged out of Facebook on Friday, you are not alone. Facebook forced more than 90 million users to log out and back into their accounts in response to a massive data breach. On Friday afternoon, the social media giant disclosed that some unknown hackers managed to exploit three vulnerabilities in its website and steal data from 50 million users and that as a


Friday, 28 September 2018

Critical Security Vulnerability in Facebook Affects 50 million Users!

Facebook recently released a press update about a critical security flaw affecting its application, which they promptly fixed after it

Critical Security Vulnerability in Facebook Affects 50 million Users! on Latest Hacking News.



Zoho Was Blacklisted by Domain Registrar TierraNet

Cloud software and services company Zoho was down after its domain registrar blocked the domain name, consequently disrupting services

Zoho Was Blacklisted by Domain Registrar TierraNet on Latest Hacking News.



Hackers Steal 50 Million Facebook Users’ Access Tokens Using Zero-Day Flaw

2018 has been a terrible year for Facebook. Facebook just admitted that an unknown hacker or a group of hackers exploited a zero-day vulnerability in its social media platform that allowed them to steal secret access tokens for more than 50 million accounts. In a brief blog post published Friday, Facebook revealed that its security team discovered the attack three days ago (on 25 September),


How to Automate App Deployment to Alibaba ECS with Mina

This article was created in partnership with Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Think you got a better tip for making the best use of Alibaba Cloud services? Tell us about it and go in for your chance to win a MacBook Pro (plus other cool stuff). Find out more here.

Mina is a deployment automation tool and a deploy Bash script generator from the Rails world, which came into the spotlight after development companies noticed its advantages over Capistrano. Mina, in contrast to Capistrano, uses only one SSH connection to the deployment server, and executes a batch of bash commands there. This makes it a lot faster than Capistrano, which opens a separate SSH session for every command.

In this article we will go through setting up Mina for the deployment of a basic Django app - an unorthodox toolset for the Django world, which tends to use Docker or Fabric more. Given Mina's simplicity and flexibility, we feel it is worth exploring its use in the deployment of Python web apps.

Django, a "web framework for perfectionists with deadlines," has been around for some time now. It started off as a content-management oriented web framework, created in-house by web developers at Lawrence Journal World for its news web portal. It was published in 2005, and from there it took off and the rest is history. It became one of the most serious and widely adopted web frameworks, competing with Ruby on Rails. It is in use by Instagram, Disqus, the Washington Times, Mozilla, Bitbucket and others. It's still thriving.

The Django docs suggest Apache with mod_wsgi as the first choice, and it may be a prevalent option. But since we are performance-obsessed, for this tutorial we decided to cover the deployment of a Django application to an Alibaba Cloud ECS instance with an NGINX and uWSGI stack.

NGINX is a web server renowned for its efficiency; it is event-based and includes caching options, so it is often an ideal solution. uWSGI is an application server container - an implementation of WSGI, Python's standard web interface. It plays along with NGINX very well.

Getting Started

The first thing we will do is create our ECS instance in the Alibaba Cloud backend console.

The process is straightforward. We will choose Ubuntu 16.04 LTS for our operating system / OS image. Upon creation, we will want to make sure our instance is assigned to the proper security groups. In Alibaba terminology, these are firewall rules for different ports. This usually works by default, but in case of any issues with web access to our instance later on, this is the thing to check.

The security groups page can be accessed through the Elastic Compute Service submenu on the left.

The next thing to do upon creation of our instance is to set it up for SSH key access.

Perhaps the most straightforward way to do this is to set the instance up, at creation, with a password. Then we can just do the standard ssh-copy-id from our starting system - presumably a local device.

ssh-copy-id root@xxx.xxx.xxx.xxx, executed from our local device (where we will replace the xxx... sequence with our Alibaba ECS instance public IP address) will prompt us for the password, and upon typing it, our key-based access should be set up.

When we log into our instance via ssh, we will do apt-get update to make sure our apt sources are up to date, and then we install git, curl, wget: apt-get install git curl wget -y

Installing the Server Environment

The Python version that ships with Ubuntu 16.04 LTS is the ancient 2.7. In order to run the latest version of Django, we need Python 3+. One of the less painful ways to fix this is to install pyenv, a Python version manager.

It allows us to change the Python version used globally, or per-project. Before we install pyenv, as per the pyenv wiki, we will install prerequisites:

apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev liblzma-dev libffi-dev

Then we can install pyenv:

curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash

Upon completion, the pyenv installer will prompt us to add a couple of lines to the ~/.bash_profile, which we will do:
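# as suggested by the pyenv installer's output at the time of writing -
# check the installer's own message, as these lines may change between versions
export PATH="$HOME/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"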

Now we update the PATH in our working session by doing source ~/.bash_profile in our terminal.

Provided that we did this correctly, we should now be able to install Python version 3.7.0.
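We do this with pyenv's own install command, which downloads and compiles the requested version (so it can take a few minutes):

$ pyenv install 3.7.0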

Doing pyenv versions in the server terminal should show us two items now: system and 3.7.0, presuming that we installed the 3.7.0 version successfully.

pyenv global 3.7.0 will make our 3.7.0 version the global python version on our system. Should you have issues with pyenv, this is the url to visit.

Server stack

The usual default with Ubuntu images is Apache server, which comes preinstalled. If it is running, we should stop it with service apache2 stop, and then install nginx with apt-get install nginx -y. This should install and start the NGINX server, which should be visible when we visit our server's public IP address.

We will also install uWSGI: pip install uwsgi (Python pip is presumably installed when we installed pyenv).

We will also make sure we have Django installed: pip install django. We could be using virtualenv here to ensure a contained isolated environment for our app, but for the sake of keeping this tutorial simple, we will skip it.

In more complex cases, though, it is probably a wise choice.
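If you do want that isolation, a minimal sketch using Python 3's built-in venv module (the path here is just an example) would be:

$ python -m venv ~/venvs/minaguide
$ source ~/venvs/minaguide/bin/activate
$ pip install django uwsgi

Everything installed while the environment is active stays contained under ~/venvs/minaguide.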

This guide presumes we have directed our domain's A records to our server IP address, so myxydomain.com is presumed in the rest of this guide to be pointed to our ECS server's public IP.

Now we will create the NGINX virtual host for our website. The file can be found here. We will just go over a couple of things:

server unix:///tmp/minaguide.sock;

Here we are connecting - with NGINX - to the Unix socket that uWSGI will create in the /tmp directory. Using /tmp is recommended to avoid the permission complications that may arise in a more complex directory tree; this is what /tmp is for.

include /root/project/minaguide/uwsgi/uwsgi_params;

This (/root/project/minaguide) is the directory of our Django project on the server, and within it we will have a uwsgi subdirectory with a uwsgi_params file, which will hold some uWSGI variables. We will come back to this later, when setting up our uWSGI app.
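Putting those two directives in context, a minimal version of the vhost might look roughly like this - a sketch assuming the socket path, project path, and domain used throughout this guide, not a copy of the linked file:

upstream minaguide {
    # the Unix socket uWSGI will create
    server unix:///tmp/minaguide.sock;
}

server {
    listen 80;
    server_name myxydomain.com;

    location / {
        # hand every request over to uWSGI
        include /root/project/minaguide/uwsgi/uwsgi_params;
        uwsgi_pass minaguide;
    }
}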

We now have the base server environment we need for deployment.

Setting Up Mina

We are setting up Mina on the machine from which we are doing the deployment. Presuming the device is also Linux / Ubuntu (and things shouldn't be much different for Mac users - nor for Windows users, as long as they use Windows Subsystem for Linux), we will want to make sure we have Ruby and rubygems installed - apt-get install ruby -y should do the trick.

When we have done this, we will have the gem command available, so we will do gem install mina.

Now - also on our local machine - we will create a directory dedicated to our project, and do mina init there.

This creates a config/deploy.rb file in our project directory, which we will edit to configure Mina:
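# A minimal sketch of what our deploy.rb might contain for this setup - the
# repository URL is a placeholder, and the exact options are best checked
# against the Mina documentation.
require 'mina/git'

set :domain,     'myxydomain.com'            # the server we deploy to
set :user,       'root'
set :deploy_to,  '/root/project/minaguide'   # where the app lives on the server
set :repository, 'git@example.com:me/minaguide.git'
set :branch,     'master'

task :deploy do
  deploy do
    invoke :'git:clone'
    invoke :'deploy:cleanup'
  end
end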

The post How to Automate App Deployment to Alibaba ECS with Mina appeared first on SitePoint.



Create Advanced Cloud Deployment Workflows with Mina

This article was created in partnership with Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Mina is a fast deployer and server automation tool, with advanced features and powerful extensibility. Learn how Mina can make your deployment process better, how to install it, how to extend it with plugins, and run through your first automated workflow. Then learn how to use Mina to migrate databases and websites, and set up even more advanced workflows with tools like WP CLI. We'll be using Alibaba Cloud ECS for this tutorial.

Think you got a better tip for making the best use of Alibaba Cloud services? Tell us about it and go in for your chance to win a MacBook Pro (plus other cool stuff). Find out more here.

The post Create Advanced Cloud Deployment Workflows with Mina appeared first on SitePoint.



Former NSA Employee Gets 5 Years in Jail For Holding Classified Data

A former NSA employee has been given five years in jail for holding classified data. The Department of Justice (DoJ)

Former NSA Employee Gets 5 Years in Jail For Holding Classified Data on Latest Hacking News.



Chegg Resets Passwords After Data Breach That Affected 40 Million Users

For all students out there using EasyBib, it’s time to reset your account passwords at Chegg. Reportedly, Chegg reset the

Chegg Resets Passwords After Data Breach That Affected 40 Million Users on Latest Hacking News.



Julian Assange will no longer be editor-in-chief of WikiLeaks

Julian Assange, the founder of popular whistleblower website WikiLeaks, is stepping down from the position of editor-in-chief of the organisation under "extraordinary circumstances." Assange, the 47-year-old Australian hacker, founded WikiLeaks in 2006 and has since made many high-profile leaks, exposing 'dirty' secrets of several individuals, political parties as well as government


Who’s behind DDoS attacks at UK universities?

The timing of the attacks suggests that many attempts to take the networks offline may not necessarily be perpetrated by organized cybercriminal gangs

The post Who’s behind DDoS attacks at UK universities? appeared first on WeLiveSecurity



Microsoft is trying to kill passwords in Azure AD application

Microsoft is quietly trying to eliminate passwords; the company has announced that users of Windows 10 and Office

Microsoft is trying to kill passwords in Azure AD application on Latest Hacking News.



Week in security with Tony Anscombe

ESET researchers have discovered the first in-the-wild UEFI rootkit. Dubbed LoJax, the research team has shown that the Sednit operators used different components of the LoJax malware against numerous countries in Europe

The post Week in security with Tony Anscombe appeared first on WeLiveSecurity



What Is VPS Hosting And Why Do You Need It?

A VPS is a virtualized server that mimics a dedicated server within a shared hosting environment. In simple words, it is technically shared hosting, but with the characteristics of dedicated hosting.

To have your website running at its full potential online, you need a good hosting environment. VPS hosting is one of the three major types of hosting that businesses prefer to opt for these days.

The service is provided by several hosting companies, and if you want to try one, go for Hostinger Virtual Private Server (VPS). They provide performance-focused VPS plans at a competitive cost, and their live chat support helps you troubleshoot in case you have a query.


How Does VPS Platform Function?

As the name suggests, VPS hosting has a virtual aspect. It works on virtualization technology, which divides one powerful physical server into multiple virtual servers. It's like having your own virtual dedicated server (one of many), complete with full isolation.

Even though your VPS is tethered to one physical server, it offers you privacy: the virtual server is reserved for you alone, and only you can utilize the resources your VPS hosting provider allocates to it.

Compartmentalization Analogy

The compartmentalization of resources (RAM, virtual CPU, hard disk space) comes from splitting the physical server into several independent virtual servers. Think of this compartmental methodology as segmenting a hard drive into multiple drives.

The same applies to the isolated hosting environment provided by a VPS server: the amendments and configurations you make on your own VPS leave your neighbor's VPS unaffected.

VPS Hosting Benefits

This type of hosting service offers you the best of shared hosting whilst providing you the resources of a dedicated server. The following are the main benefits a VPS server provides:

Privacy: Since you don't share your OS with your fellow VPS users, no other user's web application can get access to your database.

Customization: A VPS enables you to have your own OS. That means you have full access to your server applications, like MySQL, PHP, and Apache, and you are free to configure any of these services to best suit your needs.

Control: If you are dealing with a heavy application that requires a system restart once installed, you can restart with ease. Even though you are technically on shared hosting (virtual in nature, so to speak), a system restart does not affect your fellow VPS users.

Dedicated Resources: A VPS server allows you to utilize a dedicated amount of CPU power, RAM, disk space, etc. Unlike shared hosting, no other user can utilize your hosting resources. In short, you have full access to your allotted resources.

When Should You Go For VPS Hosting?

As long as your website deals with low traffic, you may feel content with low-budget shared hosting. However, as soon as your website begins to pick up traffic, you may notice a sudden decrease in its speed.

The majority of shared hosting solutions start giving up on performance when you add more content to your website or when viewer traffic begins to increase. At this point, you should move on to VPS hosting.

Another indicator that points you in the direction of VPS hosting is slow page loads. Overload can also slow down your website, but it wouldn't be much of a problem if you had opted for a VPS plan.

If these signs are significant and prominent, you need the immediate assistance of a VPS hosting solution. It's also a clear indication that your existing hosting plan can no longer satisfy the growing requirements of your website.

Reasons For Choosing A VPS Plan

Bandwidth consumption begins to climb when viewer traffic starts increasing. This is just one factor that makes a user switch to a VPS plan. Check out the additional reasons below:

  1. If you want to run an Android API with full control, a VPS can be good value for the investment.
  2. To maintain the website's performance and speed under heavy traffic.
  3. If you are investing in a new startup venture, starting with a VPS can be a good option.
  4. A VPS can be highly supportive if you are publishing an Android app on Google and are concerned about backend API storage on the server.
  5. A VPS provides you the root access needed to run MySQL.

Facts You Must Know Before Selecting A VPS Plan

Companies that cannot afford a dedicated hosting service may opt for a VPS plan. Here are some important facts that you might want to check before choosing a VPS hosting package:

  1. It is highly cost-effective and affordable compared to dedicated hosting. No question, it is costlier than its shared hosting counterpart, but it does provide you the necessary support.
  2. It comes with several essential features, such as customization and a control panel. It is one of the most reliable and flexible hosting solutions you can opt for, and it is well suited to heavy traffic and data storage.
  3. It involves no maintenance cost and tackles downtime almost instantaneously, so your website stays largely unaffected by any problems that come up.
  4. Top-notch security keeps you safe from malicious cyber attacks, so your investment stays unharmed.

Conclusion

There are several other benefits you can add to the list by opting for this hosting service. Nowadays, it is increasingly in demand among growing businesses and people who deal with heavy traffic. To get a better feel for this hosting service, you should try one.

The post What Is VPS Hosting And Why Do You Need It? appeared first on The Crazy Programmer.



Google Hacker Discloses New Linux Kernel Vulnerability and PoC Exploit

A cybersecurity researcher with Google Project Zero has released the details and a proof-of-concept (PoC) exploit for a high-severity vulnerability that has existed in the Linux kernel from version 3.16 through 4.18.8. Discovered by white hat hacker Jann Horn, the kernel vulnerability (CVE-2018-17182) is a cache invalidation bug in the Linux memory management subsystem that leads to


Thursday, 27 September 2018

Latest Hacking News Podcast #131

UEFI Rootkit spotted in the wild for the first time, Port of San Diego suffers ransomware attack and new Apple Mobile Device Management vulnerability on episode 131 of the Latest Hacking News Podcast.

Latest Hacking News Podcast #131 on Latest Hacking News.



16-Year-Old Boy Who Hacked Apple's Private Systems Gets No Jail Time

An Australian teenager who pleaded guilty to breaking into Apple's private systems multiple times over several months and downloading some 90GB of secure files has avoided conviction and will not serve time in prison. An Australian Children's Court has given the now 19-year-old adult defendant, who was 16 at the time of committing the crime, a probation order of eight months, though the magistrate


Build a Simple REST API with Node and OAuth 2.0

This article was originally published on the Okta developer blog. Thank you for supporting the partners who make SitePoint possible.

JavaScript is used everywhere on the web - nearly every web page will include at least some JavaScript, and even if it doesn’t, your browser probably has some sort of extension that injects bits of JavaScript code on to the page anyway. It’s hard to avoid in 2018.

JavaScript can also be used outside the context of a browser, for anything from hosting a web server to controlling an RC car or running a full-fledged operating system. Sometimes you want a couple of servers to talk to each other, whether on a local network or over the internet.

Today, I’ll show you how to create a REST API using Node.js, and secure it with OAuth 2.0 to prevent unwarranted requests. REST APIs are all over the web, but without the proper tools require a ton of boilerplate code. I’ll show you how to use a couple of amazing tools that make it all a breeze, including Okta to implement the Client Credentials Flow, which securely connects two machines together without the context of a user.

Build Your Node Server

Setting up a web server in Node is quite simple using the Express JavaScript library. Make a new folder that will contain your server.

$ mkdir rest-api

Node uses a package.json to manage dependencies and define your project. To create one, use npm init, which will ask you some questions to help you initialize the project. For now, you can use standard JS to enforce a coding standard, and use that as the tests.

$ cd rest-api

$ npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: (rest-api)
version: (1.0.0)
description: A parts catalog
entry point: (index.js)
test command: standard
git repository:
keywords:
author:
license: (ISC)
About to write to /Users/Braden/code/rest-api/package.json:

{
  "name": "rest-api",
  "version": "1.0.0",
  "description": "A parts catalog",
  "main": "index.js",
  "scripts": {
    "test": "standard"
  },
  "author": "",
  "license": "ISC"
}


Is this OK? (yes)

The default entry point is index.js, so you should create a new file by that name. The following code will get you a really basic server that doesn’t really do anything but listens on port 3000 by default.

index.js

const express = require('express')
const bodyParser = require('body-parser')
const { promisify } = require('util')

const app = express()
app.use(bodyParser.json())

const startServer = async () => {
  const port = process.env.SERVER_PORT || 3000
  await promisify(app.listen).bind(app)(port)
  console.log(`Listening on port ${port}`)
}

startServer()

The promisify function of util lets you take a function that expects a callback and instead returns a Promise, which is the new standard as far as handling asynchronous code goes. This also lets us use the relatively new async/await syntax and make our code look much prettier.

In order for this to work, you need to install the dependencies that you require at the top of the file. Add them using npm install. This will automatically save some metadata to your package.json file and install them locally in a node_modules folder.

Note: You should never commit node_modules to source control, because it tends to become bloated quickly, and the package-lock.json file will keep track of the exact versions you used, so that if you install this on another machine you get the same code.
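One simple way to keep it out of Git, for example, is a .gitignore entry:

$ echo 'node_modules/' >> .gitignore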

$ npm install express@4.16.3 util@0.11.0

For some quick linting, install standard as a dev dependency, then run it to make sure your code is up to par.

$ npm install --save-dev standard@11.0.1
$ npm test

> rest-api@1.0.0 test /Users/bmk/code/okta/apps/rest-api
> standard

If all is well, you shouldn’t see any output past the > standard line. If there’s an error, it might look like this:

$ npm test

> rest-api@1.0.0 test /Users/bmk/code/okta/apps/rest-api
> standard

standard: Use JavaScript Standard Style (https://standardjs.com)
standard: Run `standard --fix` to automatically fix some problems.
  /Users/Braden/code/rest-api/index.js:3:7: Expected consistent spacing
  /Users/Braden/code/rest-api/index.js:3:18: Unexpected trailing comma.
  /Users/Braden/code/rest-api/index.js:3:18: A space is required after ','.
  /Users/Braden/code/rest-api/index.js:3:38: Extra semicolon.
npm ERR! Test failed.  See above for more details.

Now that your code is ready and you have installed your dependencies, you can run your server with node . (the . says to look at the current directory, and then checks your package.json file to see that the main file to use in this directory is index.js):

$ node .

Listening on port 3000

To test that it’s working, you can use the curl command. There are no endpoints yet, so express will return an error:

$ curl localhost:3000 -i
HTTP/1.1 404 Not Found
X-Powered-By: Express
Content-Security-Policy: default-src 'self'
X-Content-Type-Options: nosniff
Content-Type: text/html; charset=utf-8
Content-Length: 139
Date: Thu, 16 Aug 2018 01:34:53 GMT
Connection: keep-alive

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /</pre>
</body>
</html>

Even though it says it’s an error, that’s good. You haven’t set up any endpoints yet, so the only thing for Express to return is a 404 error. If your server wasn’t running at all, you’d get an error like this:

$ curl localhost:3000 -i
curl: (7) Failed to connect to localhost port 3000: Connection refused

Build Your REST API with Express, Sequelize, and Epilogue

Now that you have a working Express server, you can add a REST API. This is actually much simpler than you might think. The easiest way I’ve seen is by using Sequelize to define your database schema, and Epilogue to create some REST API endpoints with near-zero boilerplate.

You’ll need to add those dependencies to your project. Sequelize also needs to know how to communicate with the database. For now, use SQLite as it will get us up and running quickly.

npm install sequelize@4.38.0 epilogue@0.7.1 sqlite3@4.0.2

Create a new file database.js with the following code. I’ll explain each part in more detail below.

database.js

const Sequelize = require('sequelize')
const epilogue = require('epilogue')

const database = new Sequelize({
  dialect: 'sqlite',
  storage: './test.sqlite',
  operatorsAliases: false
})

const Part = database.define('parts', {
  partNumber: Sequelize.STRING,
  modelNumber: Sequelize.STRING,
  name: Sequelize.STRING,
  description: Sequelize.TEXT
})

const initializeDatabase = async (app) => {
  epilogue.initialize({ app, sequelize: database })

  epilogue.resource({
    model: Part,
    endpoints: ['/parts', '/parts/:id']
  })

  await database.sync()
}

module.exports = initializeDatabase

Now you just need to import that file into your main app and run the initialization function. Make the following additions to your index.js file.

index.js

@@ -2,10 +2,14 @@ const express = require('express')
 const bodyParser = require('body-parser')
 const { promisify } = require('util')

+const initializeDatabase = require('./database')
+
 const app = express()
 app.use(bodyParser.json())

 const startServer = async () => {
+  await initializeDatabase(app)
+
   const port = process.env.SERVER_PORT || 3000
   await promisify(app.listen).bind(app)(port)
   console.log(`Listening on port ${port}`)

You can now test for syntax errors and run the app if everything seems good:

$ npm test && node .

> rest-api@1.0.0 test /Users/bmk/code/okta/apps/rest-api
> standard

Executing (default): CREATE TABLE IF NOT EXISTS `parts` (`id` INTEGER PRIMARY KEY AUTOINCREMENT, `partNumber` VARCHAR(255), `modelNumber` VARCHAR(255), `name` VARCHAR(255), `description` TEXT, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL);
Executing (default): PRAGMA INDEX_LIST(`parts`)
Listening on port 3000

In another terminal, you can test that this is actually working (to format the JSON response I use a json CLI, installed globally using npm install --global json):

$ curl localhost:3000/parts
[]

$ curl localhost:3000/parts -X POST -d '{
  "partNumber": "abc-123",
  "modelNumber": "xyz-789",
  "name": "Alphabet Soup",
  "description": "Soup with letters and numbers in it"
}' -H 'content-type: application/json' -s0 | json
{
  "id": 1,
  "partNumber": "abc-123",
  "modelNumber": "xyz-789",
  "name": "Alphabet Soup",
  "description": "Soup with letters and numbers in it",
  "updatedAt": "2018-08-16T02:22:09.446Z",
  "createdAt": "2018-08-16T02:22:09.446Z"
}

$ curl localhost:3000/parts -s0 | json
[
  {
    "id": 1,
    "partNumber": "abc-123",
    "modelNumber": "xyz-789",
    "name": "Alphabet Soup",
    "description": "Soup with letters and numbers in it",
    "createdAt": "2018-08-16T02:22:09.446Z",
    "updatedAt": "2018-08-16T02:22:09.446Z"
  }
]

What’s Going On Here?

Feel free to skip this section if you followed along with all that, but I did promise an explanation.

The Sequelize function creates a database. This is where you configure details, such as what dialect of SQL to use. For now, use SQLite to get up and running quickly.

const database = new Sequelize({
  dialect: 'sqlite',
  storage: './test.sqlite',
  operatorsAliases: false
})

Once you’ve created the database, you can define the schema for it using database.define for each table. Create a table called parts with a few useful fields to keep track of parts. By default, Sequelize also automatically creates and updates id, createdAt, and updatedAt fields when you create or update a row.

const Part = database.define('parts', {
  partNumber: Sequelize.STRING,
  modelNumber: Sequelize.STRING,
  name: Sequelize.STRING,
  description: Sequelize.TEXT
})

Epilogue requires access to your Express app in order to add endpoints. However, app is defined in another file. One way to deal with this is to export a function that takes the app and does something with it. In the other file, when we import this script, we run it as initializeDatabase(app).

Epilogue needs to initialize with both the app and the database. You then define which REST endpoints you would like to use. The resource function will include endpoints for the GET, POST, PUT, and DELETE verbs, mostly automagically.

To actually create the database, you need to run database.sync(), which returns a Promise. You’ll want to wait until it’s finished before starting your server.

The module.exports command says that the initializeDatabase function can be imported from another file.

const initializeDatabase = async (app) => {
  epilogue.initialize({ app, sequelize: database })

  epilogue.resource({
    model: Part,
    endpoints: ['/parts', '/parts/:id']
  })

  await database.sync()
}

module.exports = initializeDatabase

Secure Your Node + Express REST API with OAuth 2.0

Now that you have a REST API up and running, imagine you’d like a specific application to use this from a remote location. If you host this on the internet as is, then anybody can add, modify, or remove parts at will.

To avoid this, you can use the OAuth 2.0 Client Credentials Flow. This is a way of letting two servers communicate with each other, without the context of a user. The two servers must agree ahead of time to use a third-party authorization server. Assume there are two servers, A and B, and an authorization server. Server A is hosting the REST API, and Server B would like to access the API.

  • Server B sends a secret key to the authorization server to prove who they are and asks for a temporary token.
  • Server B then consumes the REST API as usual but sends the token along with the request.
  • Server A asks the authorization server for some metadata that can be used to verify tokens.
  • Server A verifies Server B’s request.
    • If it’s valid, a successful response is sent and Server B is happy.
    • If the token is invalid, an error message is sent instead, and no sensitive information is leaked.
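To make the first step concrete, here is a rough sketch of Server B's token request using curl, with the authorization server URL and scope matching the Okta setup later in this tutorial; the client ID and secret are placeholders you receive when creating the service application:

$ curl -s https://{yourOktaDomain}/oauth2/default/v1/token \
    -u "$CLIENT_ID:$CLIENT_SECRET" \
    -d 'grant_type=client_credentials' \
    -d 'scope=parts_manager'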

Create an Authorization Server

This is where Okta comes into play. Okta can act as an authorization server to allow you to secure your data. You’re probably asking yourself “Why Okta?” Well, it’s pretty cool to build a REST app, but it’s even cooler to build a secure one. To achieve that, you’ll want to add authentication so users have to log in before viewing/modifying groups. At Okta, our goal is to make identity management a lot easier, more secure, and more scalable than what you’re used to. Okta is a cloud service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications.

If you don’t already have one, sign up for a forever-free developer account, and let’s get started!

After creating your account, log in to your developer console, navigate to API, then to the Authorization Servers tab. Click on the link to your default server.

From this Settings tab, copy the Issuer field. You’ll need to save this somewhere that your Node app can read. In your project, create a file named .env that looks like this:

.env

ISSUER=https://{yourOktaDomain}/oauth2/default

The value for ISSUER should be the value from the Settings page’s Issuer URI field.

Highlighting the issuer URL.

Note: As a general rule, you should not store this .env file in source control. This allows multiple projects to use the same source code without needing a separate fork. It also makes sure that your secure information is not public (especially if you’re publishing your code as open source).

Next, navigate to the Scopes tab. Click the Add Scope button and create a scope for your REST API. You’ll need to give it a name (e.g. parts_manager) and you can give it a description if you like.

Add scope screenshot.

You should add the scope name to your .env file as well so your code can access it.

.env

ISSUER=https://{yourOktaDomain}/oauth2/default
SCOPE=parts_manager

Now you need to create a client. Navigate to Applications, then click Add Application. Select Service, then click Next. Enter a name for your service (e.g. Parts Manager), then click Done.

The post Build a Simple REST API with Node and OAuth 2.0 appeared first on SitePoint.



Pangu Hackers have Jailbroken iOS 12 on Apple's New iPhone XS

Bad news for Apple. The Chinese hacking team Pangu is back and has once again surprised everyone with a jailbreak for iOS 12 running on the brand-new iPhone XS. Well, that was really fast. Pangu jailbreak team has been quiet for a while, since it last released the untethered jailbreak tool for iOS 9 back in October 2015. Jailbreaking is a process of removing limitations on


Cybersecurity Researchers Spotted First-Ever UEFI Rootkit in the Wild

Cybersecurity researchers at ESET have unveiled what they claim to be the first-ever UEFI rootkit being used in the wild, allowing hackers to implant persistent malware on the targeted computers that could survive a complete wipe of a target computer's hard drive. Dubbed LoJax, the UEFI rootkit is part of a malware campaign conducted by the infamous Sednit group, also known as APT28, Fancy Bear,


Dirhunt – Search and Analyze Target Domain Directories

Dirhunt is a Python tool that can quickly scan target domains to find interesting directories and file locations.

Dirhunt – Search and Analyze Target Domain Directories on Latest Hacking News.



VPNFilter Router Malware Adds 7 New Network Exploitation Modules

Security researchers have discovered even more dangerous capabilities in VPNFilter—the highly sophisticated multi-stage malware that infected 500,000 routers worldwide in May this year, making it much more widespread and sophisticated than earlier. Attributed to Russia's APT 28, also known as 'Fancy Bear,' VPNFilter is a malware platform designed to infect routers and network-attached storage


LoJax: First UEFI rootkit found in the wild, courtesy of the Sednit group

ESET researchers have shown that the Sednit operators used different components of the LoJax malware to target a few government organizations in the Balkans as well as in Central and Eastern Europe

The post LoJax: First UEFI rootkit found in the wild, courtesy of the Sednit group appeared first on WeLiveSecurity



SheIn Data Breach Exposed Personal Details Of 6.4 Million Customers To Hackers

After so many private and government organizations have suffered data breaches, a US-based fashion retailer now enters the list. Reportedly, the

SheIn Data Breach Exposed Personal Details Of 6.4 Million Customers To Hackers on Latest Hacking News.



ex-NSA Hacker Discloses macOS Mojave 10.14 Zero-Day Vulnerability

The same day Apple released its latest macOS Mojave operating system, a security researcher demonstrated a potential way to bypass new privacy implementations in macOS using just a few lines of code and access sensitive user data. On Monday, Apple started rolling out its new macOS Mojave 10.14 operating system update to its users, which includes a number of new privacy and security controls,


Wednesday, 26 September 2018

Latest Hacking News Podcast #130

Uber agrees to pay $148m in data breach settlement, VPNFilter gains more capabilities and another banking trojan found on Google Play on today's Latest Hacking News Podcast.

Latest Hacking News Podcast #130 on Latest Hacking News.



How to Set Up a Reverse NGINX Proxy on Alibaba Cloud

This article was created in partnership with Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Think you got a better tip for making the best use of Alibaba Cloud services? Tell us about it and go in for your chance to win a MacBook Pro (plus other cool stuff). Find out more here.

Need to serve many websites from a single Linux box, optimizing resources, and automating the site launch process? Let’s get serious then, and set up a production-ready environment using Ubuntu, NGINX, and Docker — all of it on Alibaba Cloud.

This is a somewhat advanced tutorial, and we'll assume some knowledge of networking, server administration, and software containers.

Understanding the Scenario

If you are looking at this guide, chances are that you need to manage a cluster of servers, or an increasing number of websites — if not both — and are looking at what your options are for a secure, performant, and flexible environment. Well then, you've come to the right place!

Why a Reverse Proxy

In a nutshell, a reverse proxy takes a request from a client (normally from the Internet), forwards it to a server that can fulfill it (normally on an Intranet), and finally returns the server's response back to the client.

Reverse proxy

Those making requests to the proxy may not be aware of the internal network.

It is, in a way, similar to a load balancer — but implementing a load balancer only makes sense when you have multiple servers. You can deploy a reverse proxy with just one web server, and this can be particularly useful when there are different configuration requirements behind those end servers. So the reverse proxy is the "public face" sitting at the edge of the app's network, handling all of the requests.

There are some benefits to this approach:

  • Performance. A number of web acceleration techniques can be implemented, including:
    • Compression: server responses can be compressed before returning them to the client to reduce bandwidth.
    • SSL termination: decrypting requests and encrypting responses can free up resources on the back-end, while securing the connection.
    • Caching: returning stored copies of content when the same request is placed by another client can decrease response time and load on the back-end server.
  • Security. Malicious clients cannot directly access your web servers, with the proxy effectively acting as an additional defense; and the number of connections can be limited, minimizing the impact of distributed denial-of-service (DDoS) attacks.
  • Flexibility. A single URL can be the access point to multiple servers, regardless of the structure of the network behind them. This also allows requests to be distributed, maximizing speed and preventing overload. Clients also only get to know the reverse proxy's IP address, so you can transparently change the configuration for your back-end as it better suits your traffic or architecture needs.

Why NGINX

NGINX logo

NGINX Plus and NGINX are the best-in-class reverse-proxy solutions used by high-traffic websites such as Dropbox, Netflix, and Zynga. More than 287 million websites worldwide, including the majority of the 100,000 busiest websites, rely on NGINX Plus and NGINX to deliver their content quickly, reliably, and securely.

What Is a Reverse Proxy Server? by NGINX.

Apache is great, and probably the best at what it's designed for — a multi-purpose web server, all batteries included. But for this very reason, it can be more resource-hungry as well. Also, Apache is multi-threaded even for single websites, which is not a bad thing in and of itself, especially for multi-core systems, but this can add a lot of overhead to CPU and memory usage when hosting multiple sites.

Tweaking Apache for performance is possible, but it takes savvy and time. NGINX takes the opposite approach in its design — a minimalist web server that you need to tweak in order to add more features, which, to be fair, also takes some savvy. If the topic interests you, a well-established hosting company wrote an interesting piece comparing the two: Apache vs NGINX: Practical Considerations.

In short, NGINX beats Apache big time out of the box, performance- and resource-consumption-wise. For a single site you can choose not to care; on a cluster, or when hosting many sites, NGINX will surely make a difference.

Why Alibaba Cloud

Alibaba Cloud logo

Part of the Alibaba Group (Alibaba.com, AliExpress), Alibaba Cloud has been around for nearly a decade at the time of this writing. It is China's largest public cloud service provider, and the third largest in the world; so it isn't exactly a "new player" in the cloud services arena.

However, it wasn't until somewhat recently that Alibaba rebranded its Aliyun cloud services company, put together a fully comprehensive set of products and services, and decidedly stepped out of the Chinese and Asian markets to dive into the "Western world".

In our Side-by-Side Comparison of AWS, Google Cloud and Azure, we did a full review of what you can do in the cloud — elastic computing, database services, storage and CDN, application services, domains and websites, security, networking, analytics, ... and yes, Alibaba Cloud covers it all.

Deploying to Alibaba Cloud

You'll need an Alibaba Cloud account before you can set up your Linux box. And the good news is that you can get one for free! For the full details see How to Sign Up and Get Started.

For this guide we will use Ubuntu Linux, so you can follow the How to Set Up Your First Ubuntu 16.04 Server on Alibaba Cloud guide. Mind you, you could use Debian or CentOS instead — in fact, you can go ahead and check 3 Ways to Set Up a Linux Server on Alibaba Cloud.

Once you get your Alibaba Cloud account and your Linux box is up and running, you're good to go.

Hands On!

Installing NGINX

If we wanted to handle the whole process ourselves, we would first need to install NGINX.

On Ubuntu we'd use the following commands:

$ sudo apt-get update
$ sudo apt-get install nginx

And you can check the status of the web server with systemctl:

$ systemctl status nginx    

With systemctl you can also stop/start/restart the server, and enable/disable the launch of NGINX at boot time.
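For example:

$ sudo systemctl restart nginx
$ sudo systemctl enable nginx

The first command restarts the running server; the second makes NGINX start automatically at boot.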

These are the two main directories of interest for us:

  • /var/www/html NGINX default website location.
  • /etc/nginx NGINX configuration directory.

Now, setting up a reverse proxy can be a somewhat cumbersome enterprise (and there are several guides that cover this process), as there are a number of network settings we need to go through, and files we need to update as we add sites/nodes behind our proxy.
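As a taste of the manual approach, each proxied site would need its own vhost file — the domain and back-end address below are hypothetical, just for illustration:

server {
    listen 80;
    server_name site1.example.com;

    location / {
        # forward requests to this site's back-end server
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Multiply that by every site — plus a symlink into sites-enabled and a reload each time — and it quickly adds up.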

That is, of course, unless we automate the whole thing using software containers...

Docker to the Rescue

Before we can start using software containers to automate our workflow, we first need to install Docker, which for Ubuntu is a fairly straightforward process.

Uninstall any old version:

$ sudo apt-get remove docker docker-engine docker.io

Install the latest Docker CE version:

$ sudo apt-get update
$ sudo apt-get install docker-ce

If you want to install a specific Docker version, or set up the Docker repository, see Get Docker CE for Ubuntu.

Setting the Network

Part of setting up a reverse proxy infrastructure is properly setting networking rules.

So let's create a network with Docker:

$ docker network create nginx-proxy

And believe it or not, the network is set!

NGINX-Proxy!

Now that we have Docker running on our Ubuntu server, we can streamline the process of installing, setting up the reverse proxy, and launching new sites.

Jason Wilder did an awesome job putting together a Docker image that does exactly that: jwilder/nginx-proxy, an automated NGINX proxy for Docker containers using docker-gen, which works perfectly out-of-the-box.

Here's how you can run the proxy:

$ docker run -d -p 80:80 -p 443:443 --name nginx-proxy --net nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

Essentially:

  • we told Docker to run NGINX as a daemon/service (-d),
  • mapped the proxy's HTTP and HTTPS ports (80 and 443) to the web server(s) ports behind it (-p 80:80 -p 443:443),
  • named the NGINX proxy for future reference (--name nginx-proxy),
  • used the network we previously set (--net nginx-proxy),
  • mapped the UNIX socket that Docker daemon is listening to, to use it across the network (-v /var/run/docker.sock:/tmp/docker.sock:ro).

And believe it or not, the NGINX reverse proxy is up and running!

Launching Sites, Lots of Sites

Normally when using Docker you would launch a "containerized" application, be it a standard WordPress site, a specific Moodle configuration, or one of your own images with your own custom apps.

Launching a proxied container now is as easy as specifying your virtual domain with VIRTUAL_HOST=subdomain.yourdomain.com:
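# hypothetical example - the container name, image, and domain are placeholders
$ docker run -d --name site1 --net nginx-proxy -e VIRTUAL_HOST=subdomain.yourdomain.com nginx

The proxy watches the Docker socket through docker-gen, notices the new container, and starts routing subdomain.yourdomain.com to it — no NGINX configuration files to edit by hand.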

The post How to Set Up a Reverse NGINX Proxy on Alibaba Cloud appeared first on SitePoint.



Firefox DoS Proven to Crash Browsers and Sometimes Even Users’ PCs

Last week, a security researcher pointed out how a CSS-based attack could crash iPhones, iPads, and Mac devices. The same

Firefox DoS Proven to Crash Browsers and Sometimes Even Users’ PCs on Latest Hacking News.



Twitter patches bug that may have spilled users’ private messages

The flaw affected one of the platform’s APIs between May 2017 and September 10 of this year, when it was patched “within hours”

The post Twitter patches bug that may have spilled users’ private messages appeared first on WeLiveSecurity



Your Privacy is at Risk From Using a VPN: An Interview With Scott Arciszewski

We recently interviewed security engineer Scott Arciszewski, and asked him a few questions about security and weak cryptography issues in

Your Privacy is at Risk From Using a VPN: An Interview With Scott Arciszewski on Latest Hacking News.



Defending your company from cyberattack

ESET CTO Juraj Malcho outlines some of the ways in which organizations can reduce their cybersecurity risk

The post Defending your company from cyberattack appeared first on WeLiveSecurity



New Linux Kernel Bug Affects Red Hat, CentOS, and Debian Distributions

Security researchers have published the details and proof-of-concept (PoC) exploits of an integer overflow vulnerability in the Linux kernel that could allow an unprivileged user to gain superuser access to the targeted system. The vulnerability, discovered by cloud-based security and compliance solutions provider Qualys, which has been dubbed "Mutagen Astronomy," affects the kernel versions


5 Essential Tips for Writing Computer Science Research Project

Working on computer science research projects can be a difficult task, partly because computer science projects are unlike research projects in any other discipline. Depending on the area of study, a research project can be defined differently. A general definition states that a research project aims to take existing research (done by others) and either expand on it or create something new from it.

Oftentimes in computer science classes, and in similar classes, professors will assign a research project as one of the final projects of the class. These projects can involve a wide array of things, from making a new computer program to using artificial intelligence in a new way. Writing a paper about the project is a common aspect of a research project. The paper is usually used to explain the project and show the results and/or findings of the project.


Below are some of the best tips to help computer science students pick a research project and how to write about their chosen topic.

1. Pick the Project Topic

The first thing to do when working on a professional computer science research paper is picking the topic. In computer science, there are a variety of research areas to choose from. Below is a list of just a few examples of research project topics that students may want to consider.

  • Medical apps
  • Education apps
  • Entertainment apps
  • Information storage and retrieval
  • Communication tools
  • Social media research
  • Video/media creation and editing
  • Virus/malware protection

2. Research and Take Notes

Once students have picked their research topic, it’s time to begin the actual research. Since computer science is constantly changing, along with computers and other technological devices, one of the best ways to research computer science is by using the Internet.

When working on research projects for other classes, like English or history, it is best to use book research. However, books on computer science can become outdated quickly, which is why it is so important to use Internet research, as it can update as quickly as computer science does.

When researching online, students should make sure they are using reliable sources. One way to do this is to make sure that the articles or journals used are peer-reviewed. Peer-reviewed papers have been looked over by other professionals, so they are legitimate sources of true information.

Some great places to find peer-reviewed sources include the Journal of Computer Science, The International Journal of Computer Science, and The Journal of Computer and System Sciences.

Students should always take notes as they conduct their research. This way, they can save time as they will not need to reread entire articles as they work on their project or their research paper. It is not likely that anyone else will see a student’s notes; there is no specific way they need to be formatted.

If students do not have time to research, they can hire an academic writer to do the research for them. These writers can help to conduct the research, take notes, and even cite sources by writing a bibliography page. This can save a lot of time when it comes to the research project as a whole.

3. Conduct the Research Project

The next step in writing about a computer science research project is to actually conduct the project. While this may seem more like an obvious step than a tip, there is one important tip that goes along with it. Just like when a student researches and takes notes, a student should also take notes while they conduct their project. And just like with researching, taking notes while conducting the project can help to save time in the long run.

4. Introduce the Project and Field

The very first thing that should be included in the write-up of a research project, which is usually a long essay, is the introduction. There are two basic things that should be included in the introduction: the area of computer science the project is involved with and the actual topic/name of the research project.

According to the University of California, Irvine, there are about twenty research areas just within computer science. Each computer science research project will fall into one of these research areas. Below is a list of some of the most popular research areas in computer science. To learn more about any specific field, click on the link above.

  • Algorithms
  • Artificial intelligence
  • Graphic design
  • Databases
  • Embedded systems
  • Multimedia
  • Operating systems
  • Privacy
  • Programming
  • Software engineering

Once a student identifies the field they are working in, they should then go on to briefly summarize what their project topic is and/or what their hypothesis for the project was.

5. Writing about Research and Results

The body of the paper, which will take up the majority of the research project paper, should contain information that fully explains how the students went about making a hypothesis, testing their ideas, learning from their research, and more.

The beginning paragraph(s) of the body should explain the research that went into the paper. What sources did the student(s) use? What did they learn? What did they hope to expand on? These are some questions students should think about when writing their paper.

The middle body paragraph(s) should focus on the research project itself. What steps did the student(s) take to conduct the project? Students should make this section of the paper as detailed as possible in order for the reader to have a good idea of what happened during the project.

The last body paragraph(s) should focus on the results of the project. Did the students achieve their goal? Did the project go as the student(s) expected? If the project did not produce the expected results, what did the actual results show? These questions and more should be answered to give the reader (most likely just the professor) a good idea of what the project actually accomplished.

If students follow these tips, then their computer science project and paper are sure to flow smoothly. Students should remember to take notes, remain focused, and write clearly. All of these things can help to make the project and its related paper a success.

The post 5 Essential Tips for Writing Computer Science Research Project appeared first on The Crazy Programmer.



Google Chrome Secretly Logs Users' Activity On Google Sites

The launch of Google Chrome 69 kept everyone enthralled with a trail of reports for some new features. In one

Google Chrome Secretly Logs Users' Activity On Google Sites on Latest Hacking News.



Ex-NSA Developer Gets 5.5 Years in Prison for Taking Top Secret Documents Home

A former NSA employee has been sentenced to five and a half years in prison for illegally taking a copy of highly classified documents and hacking tools to his home computer between 2010 and 2015, which were later stolen by Russian hackers. Nghia Hoang Pho, 68, of Ellicott City, Maryland—who worked as a developer with Tailored Access Operations (TAO) hacking group at the NSA since April 2006—


United Nations Mistakenly Exposed Sensitive Data to The Public

After a lot of organizations and spy firms confessed accidental exposure of their data, the recent incident lists an even

United Nations Mistakenly Exposed Sensitive Data to The Public on Latest Hacking News.



Tuesday, 25 September 2018

Latest Hacking News Podcast #129

The US Dept. of Commerce issues request for comments on new online privacy rules, two United Nations data leaks reported and Mozilla launches a free data breach notification service.

Latest Hacking News Podcast #129 on Latest Hacking News.



Seven Steps for Growth Hacking Your Business with Data

No data? No problem. You can growth hack your way to success in seven steps.

Whether you’re pre-launch or ready to scale, data can hold the key to your business’ growth. Even if you don’t have much data about your customers or product yet, you can still use data to growth hack your business by following these seven steps.

1. Define your business objectives

Before you can shoot for the stars you need to be clear about what you're trying to achieve. While this is often easier said than done, it's a crucial step because your goal will drive your strategy.

According to Simon Mathonnet, Chief of Digital Strategy for Splashbox, it’s important to translate your objective into something practical. For example, rather than saying your goal is to grow your business, it should be more specific — like you want to quit your job so you can focus full-time on the company, or you want to raise a Series A investment round.

2. Make your objectives measurable

Once you’ve defined your objective, it then needs to be translated into something that you can track. A good way to do this is to make it SMART. This stands for:

  • Specific: Make the objective clear and easy to grasp.
  • Measurable: Set a quantitative goal that can be measured.
  • Achievable: Get buy-in from your team and give them (and yourself) an incentive by having an objective that’s within reach.
  • Relevant: Your goal needs to make sense and be in line with what the business is trying to achieve.
  • Time-bound: Be clear about when the objective needs to be achieved. This gives you a deadline to work toward.

The SMART versions of the two objectives above might be (a small sketch for checking the first one follows this list):

  • Quit your job to focus on your startup = Generate $X of revenue per month for three consecutive months.
  • Series A capital raising = Retain Y active users for three months before approaching investors.
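To show how concrete a measurable objective becomes, here is a minimal Python sketch, using made-up numbers, that checks whether monthly revenue has stayed at or above a target for three consecutive months:

    # Check whether monthly revenue hit the target for three straight months.
    TARGET = 10_000  # hypothetical $X per month

    monthly_revenue = [8_200, 9_500, 10_400, 11_050, 10_900]  # made-up figures

    def goal_met(revenues, target, streak=3):
        """True if `streak` consecutive entries are at or above `target`."""
        run = 0
        for amount in revenues:
            run = run + 1 if amount >= target else 0
            if run >= streak:
                return True
        return False

    print(goal_met(monthly_revenue, TARGET))  # True: the last three months qualify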

3. Create a hypothesis

Once you’ve defined your goal you need to find a way to get closer to achieving it. One way to do this is to create a hypothesis that you can implement and test quickly. The hypothesis is essentially an educated guess or hunch based on what you know about your product or service and customers.

For example, if your objective is to grow revenue, then your hypothesis might be that people who look at three or more products on your website are more likely to purchase. This means you need to find a way to get people who visit your site to look at three or more products because you believe this will increase your revenue.
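Once you have session data, a rough Python sketch like the one below can test that hypothesis; the session tuples and numbers here are invented purely for illustration:

    # Compare purchase rates for sessions that viewed 3+ products vs. fewer.
    # Each tuple is (products_viewed, purchased) -- invented sample data.
    sessions = [(1, False), (2, False), (4, True), (3, False),
                (5, True), (2, True), (6, True), (1, False)]

    def purchase_rate(group):
        # Fraction of sessions in the group that ended in a purchase.
        return sum(purchased for _, purchased in group) / len(group)

    many = [s for s in sessions if s[0] >= 3]
    few = [s for s in sessions if s[0] < 3]

    print(f"3+ products viewed: {purchase_rate(many):.0%} purchased")
    print(f"under 3 products:   {purchase_rate(few):.0%} purchased")
    # If the first rate is clearly higher, the hypothesis survives this test.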

4. Collect data

To be able to test and measure your hypothesis you need to have data. The data sets a baseline — so you know your starting point — and measures your results. The type of data that you need will depend on your hypothesis.

If you’re pre-launch, you probably don’t have much customer data yet. Most startups also struggle with data because of their uniqueness — traditional, quantifiable data sources like market research may not have insights for your product or market segment. While it can be expensive to commission market research, thankfully there’s a plethora of technology that’s relatively inexpensive that can help you mine information and generate new data.

Some ways you can collect data include:

  • Google Analytics: This is useful if you have had many visitors to your site. It collects data on what interactions people have on your website, like how long they spent on your site, what pages are the most popular, what search terms they used and what links they clicked.
  • PoweredLocal: If you have a brick and mortar shopfront, this platform lets you collect information about your customers by offering them complimentary Wi-Fi access. When customers use social media or email to sign onto your network, you can find out who they are, what they like, and potentially sign them up to your newsletter or offers.
  • Online reviews: Review sites like Yelp or TripAdvisor serve two purposes. They let people who are looking for your product or service hear what you’re like directly from your customers, and they provide a way for customers to give you feedback. This feedback is data that you can use to identify opportunities to improve your customer experience.
  • HotJar: This heatmap tool lets you see how people use and respond to messages on your site by showing what they engage with. Unlike Google Analytics, you don’t need too many visitors to your site to start seeing what is attracting or repelling your customers.
  • Social Media: Research social media channels to see what customers are talking about. This may be either on your own social media pages or your competitors’. Social media platforms like Facebook and Twitter also have analytics tools that are often available for free. These can show you demographic and engagement information about your audience.
  • LeadChat: Use a live chat function on your website to get direct input from your customers. Find out your customer demographic and see what questions they ask to determine what they’re interested in or are struggling with.
  • Events and pitch nights: Collect anecdotal data by talking to potential customers, peers and competitors. You can find them at industry events, pitch nights and conferences.

The post Seven Steps for Growth Hacking Your Business with Data appeared first on SitePoint.



SHEIN-Fashion Shopping Site Suffers Data Breach Affecting 6.5 Million Users

U.S. online fashion retailer SHEIN has admitted that the company has suffered a significant data breach after unknown hackers stole personally identifiable information (PII) of almost 6.5 million customers. Based in North Brunswick and founded in 2008, SHEIN has become one of the largest online fashion retailers that ships to more than 80 countries worldwide. The site has been initially


Bloodhound – A Tool For Exploring Active Directory Domain Security

Bloodhound is an open source application used for analyzing the security of Active Directory domains. The tool is inspired by graph

Bloodhound – A Tool For Exploring Active Directory Domain Security on Latest Hacking News.



ZDResearch Advanced Web Hacking Training 2018 – Learn Online

Are you looking to master web hacking? Interested in a bug-hunting career? Do you want to land a job in cybersecurity? Are you already working as a security engineer, but want to further advance or refine your skills? If yes, read on. ZDResearch Advanced Web Hacking (AWH) course, including optional certification upon completion—is the answer. Last week, we sat with the ZDResearch training


How to improve hiring practices in cybersecurity

Should schools and businesses do more to combat the shortfall of cybersecurity professionals by changing the hiring process for those interested in having a career in the industry?

The post How to improve hiring practices in cybersecurity appeared first on WeLiveSecurity



Security In The Crypto World: Exchanges, Wallets, Personal Data. Kiev To Host The Largest Cybersecurity Forum In Eastern Europe

October 8-11, the international cybersecurity forum HackIT 4.0 will be held in Kiev, Ukraine. The annual forum aims to be

Security In The Crypto World: Exchanges, Wallets, Personal Data. Kiev To Host The Largest Cybersecurity Forum In Eastern Europe on Latest Hacking News.



Bitcoin Core Software Patches a Critical DDoS Attack Vulnerability

The Bitcoin Core development team has released an important update to patch a major DDoS vulnerability in its underlying software that could have been fatal to the Bitcoin Network, which is usually known as the most hack-proof and secure blockchain. The DDoS vulnerability, identified as CVE-2018-17144, has been found in the Bitcoin Core wallet software, which could potentially be exploited by


Temple of Doom – Vulnhub CTF Challenge Walkthrough

Temple of Doom is a Boot2Root CTF Challenge and is available at Vulnhub. This machine is intended for “Intermediates” and

Temple of Doom – Vulnhub CTF Challenge Walkthrough on Latest Hacking News.



Q&A with 17 year old OSCP, Kunal Khubchandani : His Thoughts on OSCP

Confused between choices? What to do: OSCP, CEH, or CISSP? If you have decided to focus on becoming an

Q&A with 17 year old OSCP, Kunal Khubchandani : His Thoughts on OSCP on Latest Hacking News.



Monday, 24 September 2018

Latest Hacking News Podcast #128

Microsoft Ignite 2018 security announcements, new Mozilla Firefox browser attack and a recent Adwind RAT campaign on episode 128 of the Latest Hacking News Podcast.

Latest Hacking News Podcast #128 on Latest Hacking News.



What is Python used for?

In this article you will learn what Python is used for and see its main applications.

Python is a ubiquitous scripting language. Many of us are aware of the impressive things we can do with Python, but that is a story for another time. These days Python has found its way into web development, app development, scientific and numeric computing, business applications, GUI design, automation, artificial intelligence, machine learning, and more.

Without further delay, let's dive in and see what Python has to offer. We will go step by step and try our best not to miss any point; if we do, please let us know in the comment section, as the possibilities with Python are immense. Let us quickly start our journey and look at some of the applications of Python.


What is Python used for? – Python Applications

Software Development and Testing

Though sometimes referred to as a supporting language, Python can be used to develop robust software in its own right. This might surprise some people, but Python is widely used in professional software development at large scale. Companies like Google use Python in the development and testing of their products. Many of Google's services are deployed on systems written in Python, and even YouTube's homepage is written in Python (isn't that cool?). Let us quickly look at some major software we come across in our daily lives that is written in Python.

  • Google: Google is one of the most prominent companies using Python in its development and design. Python has proven efficient at handling the traffic Google and its connected apps receive, and it is well known for computing workloads.
  • YouTube: One of the most beloved apps of our time, YouTube is, perhaps surprisingly, written largely in Python.
  • Dropbox: Dropbox started out storing documents and has since spread its wings to store practically everything. Its sharing and synchronising features have made it even more popular with its audience, and all of this is powered by Python.
  • Instagram: Instagram has become the most popular application for sharing pictures and videos. Beyond sharing, it provides several other features, all powered by Python, that make it even more appealing.
  • Quora: Quora, the next big name after Google when it comes to answering the queries its users post, is also built with the help of Python.

Apart from the applications discussed above, many more products are powered by Python, among them Reddit, Spotify, Bitbucket, SurveyMonkey, and Pinterest.

Also Read: Best Way to Learn Python

Web & Internet Development

Python is a scripting language well suited to developing large-scale web apps, thanks to features that languages like .NET and PHP do not offer. Python provides full frameworks such as Django and Pyramid and micro-frameworks such as Flask and Bottle to ease the process of web development. Beyond these, Python also comes equipped with advanced content management systems such as Plone and Django CMS.
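To give a feel for how lightweight a Python micro-framework is, here is a minimal Flask sketch (assuming Flask has been installed with pip install flask); it is an illustration, not a production setup:

    # A minimal Flask web app: one route returning a greeting.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from Flask!"

    if __name__ == "__main__":
        # Starts a development server at http://127.0.0.1:5000/
        app.run(debug=True)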

On the internet side, Python ships with libraries covering many functionalities and features out of the box. The standard library supports a range of internet formats and protocols (a short sketch follows this list):

  • E-mail processing
  • JSON, HTML & XML
  • FTP, IMAP, and other Internet protocols support
  • Easy-to-use socket interface
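As a quick illustration, the sketch below parses JSON and XML using only the standard library; the sample payloads are made up for the example:

    # Parsing JSON and XML with Python's standard library alone.
    import json
    import xml.etree.ElementTree as ET

    payload = '{"user": "alice", "logins": 42}'
    data = json.loads(payload)            # JSON text -> Python dict
    print(data["user"], data["logins"])   # alice 42

    doc = ET.fromstring("<feed><item>hello</item></feed>")
    print(doc.find("item").text)          # hello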

Apart from these, Python comes with a bunch of other libraries that let its users do interesting work. We will mention some of them later in this post where relevant.

Development of Desktop GUIs

Python offers several options for GUI development, such as PyQt, Tkinter, and Kivy. Among these, Tkinter is the option most widely adopted by developers. That said, Python has not gained much popularity in professional GUI development (although it fared well in a survey done in 2014).

There are arguably better alternatives to Tkinter and PyQt, or to Python generally, for professional GUI work: .NET and C# are commonly advised for Windows GUIs, Swift/Objective-C with Cocoa on Mac, and C++/Qt or Java/JavaFX on Linux.

When using Python, though, Tkinter is the most commonly used library for the purpose, as it is the fastest and easiest way to get a GUI up and running.
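A minimal sketch shows just how little Tkinter code a working window takes (Tkinter ships with the standard CPython installer, so nothing extra is needed):

    # A minimal Tkinter window: a label and a button that updates it.
    import tkinter as tk

    root = tk.Tk()
    root.title("Tkinter demo")

    label = tk.Label(root, text="Hello, Tkinter!")
    label.pack(padx=20, pady=10)

    # command= takes a callable invoked when the button is clicked.
    button = tk.Button(root, text="Click me",
                       command=lambda: label.config(text="Clicked!"))
    button.pack(pady=(0, 10))

    root.mainloop()  # hand control to the Tk event loop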

Kivy, the most recent of these toolkits, is used for writing multi-touch applications.

Business Applications and Finance

In recent years Python has seen remarkable growth in the business, e-commerce, and trading sectors. It is also quite feasible to build ERPs with the modules Python provides. Nowadays Python is widely used for the qualitative and quantitative analysis of stocks, cryptocurrencies, and more; one of its prime applications, through modules like NumPy, pandas, and SciPy, is forecasting stock and crypto prices. Easier maintenance, comparatively compact code, and easy integration with other languages and platforms make Python one of the first choices.
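As a small, hedged example of the kind of analysis pandas makes easy, the sketch below computes a simple moving average over made-up closing prices; real work would load prices from a market data feed instead:

    # Simple moving average over a (made-up) series of closing prices.
    import pandas as pd

    closes = pd.Series([101.2, 102.5, 101.8, 103.1, 104.0, 103.6, 105.2],
                       name="close")

    # 3-day simple moving average; the first two values are NaN because
    # the window is not yet full.
    sma3 = closes.rolling(window=3).mean()

    print(pd.DataFrame({"close": closes, "sma3": sma3}))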

Trading is a field that requires a great deal of analysis, which can be carried out easily in Python. Trading experts build winning strategies and forecast market trends using Python, and such tools are often delivered as web applications built on the Django framework.

Payment gateways can also be built in Python using the Django framework.

Scientific, Numeric and Automation

This is among the most prominent and fastest-growing applications of Python. Python now plays a vital role in science, automation, artificial intelligence, and machine learning, to some extent leaving R behind (depending on individual preference). Python offers many modules and libraries for scientific and numeric computing, some of which are listed below (a short sketch follows the list):

  • Pandas: Used for analysing datasets.
  • SciPy: Used for scientific, engineering, and mathematical computing.
  • IPython: Used for interactive work, with easy editing and recording of sessions.
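As a tiny example of these libraries at work, the sketch below numerically integrates sin(x) over [0, pi] with SciPy; the exact answer is 2, so it doubles as a sanity check (NumPy and SciPy are assumed to be installed):

    # Numerical integration with SciPy: integrate sin(x) from 0 to pi.
    import numpy as np
    from scipy.integrate import quad

    result, abs_error = quad(np.sin, 0, np.pi)
    print(result)     # ~2.0
    print(abs_error)  # estimated absolute error of the quadrature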

Python is also doing remarkable work in automation, AI, and machine learning. Whether we are talking about machine learning models or automating a vehicle or a spacecraft, from Tesla to NASA, Python is everywhere.

End Notes:

We have tried to cover the most prominent uses of this scripting language and have left out the most obvious Python applications, as they are well known. If you have any suggestion or query, please let us know in the comments below; we will be happy to help.

The post What is Python used for? appeared first on The Crazy Programmer.