
Thursday, 26 July 2018

misuse of realpath function on POSIX platforms - CVE-2018-14338

Hi Internet,

Product Affected:  Exiv2, a C++ library and a command line utility to read and write Exif, IPTC and XMP image metadata.

Summary:  samples/geotag.cpp in the example code of Exiv2 0.26 misuses the realpath function on POSIX platforms (other than Apple platforms) where glibc is not used, possibly leading to a buffer overflow.

Example:
There are multiple files in /samples/, each responsible for its own tests. For instance, conntest.cpp is used when a user needs to test an http/https/ftp/ssh/sftp based connection via exiv2.

Similarly, I had a look at the geotag.cpp file, which is responsible for reading gpx files and updating images with GPS tags.
This file uses `realpath()`, a function that is broken by design per POSIX.1-2001.
According to the documentation of `realpath()`, the output buffer needs to be at least `PATH_MAX` bytes: callers must provide output buffers large enough to hold the maximum-size possible result from path manipulation functions.

But in geotag.cpp the buffer is a fixed 1024-byte array on Apple platforms and a NULL pointer everywhere else, rather than a buffer of at least PATH_MAX bytes; on non-glibc platforms, passing NULL is not guaranteed to make realpath() allocate the result.
#ifdef __APPLE__
                    char   buffer[1024];
#else
                    char*  buffer = NULL;
#endif
                    char*  path = realpath(arg,buffer);
                    if  ( t && path ) {
                        if ( options.verbose) printf("%s %ld %s",path,(long int)t,asctime(localtime(&t)));
                        gFiles.push_back(path);
                    }
                    if ( path && path != buffer ) ::free((void*) path);
                }
                if ( type == typeUnknown ) {
                    fprintf(stderr,"error: illegal syntax %s\n",arg);
                    result = resultSyntaxError ;
                }
            } break;
        }
}
Elsewhere (for example in libuv), the buffer size comes from `uv__fs_pathmax_size()`, which attempts to use `pathconf(path, _PC_PATH_MAX)` as noted in the realpath(3) docs. In geotag.cpp, neither `uv__fs_pathmax_size()` nor `pathconf(path, _PC_PATH_MAX)` is in use.

Passing an inadequately sized output buffer to a path manipulation function can result in a buffer overflow. Such functions include `realpath()`, `readlink()`, `PathAppend()`, and others.
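For illustration, here is a minimal sketch (mine, not Exiv2 code) of sizing the output buffer for `readlink()` the way realpath(3) suggests: ask `pathconf()` first and fall back to `PATH_MAX`, remembering that `readlink()` does not NUL-terminate.
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    /* "/proc/self/exe" is just a convenient default for the demo */
    const char *link = argc > 1 ? argv[1] : "/proc/self/exe";

    long max = pathconf(link, _PC_PATH_MAX);      /* -1 means "indeterminate" */
    size_t size = (max > 0 ? (size_t)max : PATH_MAX) + 1;

    char *buf = malloc(size);
    if (!buf) return 1;

    ssize_t n = readlink(link, buf, size - 1);    /* readlink() does not add '\0' */
    if (n < 0) { perror("readlink"); free(buf); return 1; }
    buf[n] = '\0';

    printf("%s -> %s\n", link, buf);
    free(buf);
    return 0;
}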

An issue was submitted to the Exiv2 team and a commit was pushed to patch it: on Linux the fix calls `pathconf()`, though simply using a buffer of `PATH_MAX+1` bytes would also have resolved it.
diff --git a/samples/geotag.cpp b/samples/geotag.cpp
index 1355827..4da38af 100644
--- a/samples/geotag.cpp
+++ b/samples/geotag.cpp
@@ -806,8 +806,11 @@ int main(int argc,const char* argv[])
                 if ( options.verbose ) printf("%s %s ",arg,types[type]) ;
                 if ( type == typeImage ) {
                     time_t t    = readImageTime(std::string(arg)) ;
-#ifdef __APPLE__
+#if   defined(__APPLE__)
                     char   buffer[1024];
+#elif defined(__gnu_linux__)
+                    char   buffer[_MAX_PATH];
+                    pathconf(arg ,_MAX_PATH);
 #else
                     char*  buffer = NULL;
 #endif
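For comparison, here is a hedged sketch (mine, not the upstream fix) of one portable way to call realpath(): let the library allocate the result where that is documented (glibc, Apple), and otherwise size the buffer from pathconf()/PATH_MAX.
#include <limits.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch only; the caller free()s the returned string. */
static char *resolve_path(const char *arg)
{
#if defined(__GLIBC__) || defined(__APPLE__)
    return realpath(arg, NULL);                 /* library allocates the buffer */
#else
    long max = pathconf(arg, _PC_PATH_MAX);     /* may be -1 if indeterminate */
    size_t size = (max > 0 ? (size_t)max : PATH_MAX) + 1;
    char *buffer = malloc(size);
    if (!buffer) return NULL;
    if (!realpath(arg, buffer)) {               /* buffer is at least PATH_MAX bytes */
        free(buffer);
        return NULL;
    }
    return buffer;
#endif
}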
CVE-2018-14338 was assigned to this issue. Hope you like the read.
 
Regards
Dhiraj

Thursday, 19 July 2018

race condition in htslib - CVE-2018-14329

Hi Internet,

Product affected: htslib, a C library for high-throughput sequencing data formats.

Summary: In HTSlib 1.8, a race condition in `cram/cram_io.c` might allow local users to overwrite arbitrary files via a symlink attack.

File faidx.c:
        snprintf(fai_file, PATH_MAX, "%s.fai", fn);
        if (access(fai_file, R_OK) != 0)
            if (fai_build(fn) != 0)
                return NULL;
}
The API for the fai_build* functions is to take a filename. This inherently leaves a race. Code currently does things like:
`if (!access(...)) fai_build()`
or
`if (!open(...)) fai_build()`

Both of these suffer the same problem - an attacker could symlink the file between the access/open check and the decision to rebuild it. This solution makes the build function itself unlink the file and does an exclusive open so it'll fail if someone wins the race between unlink and hopen. There is one small fly in the ointment - we can currently do "samtools faidx s3:foo.fa" and this will open s3:foo.fa.fai. The unlink will fail, so we've now changed behaviour because previously we could overwrite an existing s3 fai file with a new one whereas now we require the user to manually delete their existing one first. Without having an `hunlink()` function we're scuppered on this.
@@ -493,7 +493,12 @@ static int fai_build3_core(const char *fn, const char *fnfai, const char *fngzi)
         goto fail;
     }
 
-    fp = hopen(fnfai, "wb");
+    if (hisremote(fnfai)) {
+        fp = hopen(fnfai, "wb");
+    } else {
+        unlink(fnfai);
+        fp = hopen(fnfai, "wbx");
+    }
 
     if ( !fp ) {
         hts_log_error("Failed to open %s index %s : %s", file_type, fnfai, strerror(errno));
The only possible workaround here is to look at the filename first: if the fasta file uses the fd backend we use the exclusive open, and otherwise we don't. This is what the deployed patch implements; hope you like the read.
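For readers unfamiliar with the "wbx" mode, here is a small sketch in plain POSIX (mine, not the HTSlib hfile API) of the unlink-then-exclusive-create pattern the patch relies on:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Create an output file that must not already exist, so a symlink planted
 * between any earlier check and this open cannot redirect the write. */
static int create_index_exclusively(const char *path)
{
    unlink(path);   /* drop any stale file; if this fails, open() decides below */

    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd < 0) {
        /* EEXIST means someone re-created the path after the unlink,
         * i.e. we lost the race; refuse to write rather than follow it. */
        perror("open");
        return -1;
    }
    return fd;      /* the caller writes the index and close()s the fd */
}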

Exploiting: I 'assume' that this can be exploited via a hardlink.
exploit.c
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
  /* only meaningful if we already run with euid 0, e.g. via a setuid binary */
  if (geteuid() != 0) exit(1);
  setuid(geteuid());
  char *args[] = { "/bin/sh", 0 };
  return execve(args[0], args, 0);
}
Trigger a race condition by creating a hardlink to the vulnerable program.
while :; do ln -f ./vulnpoc; ln -f ./exploit; done
The htslib team, however, says this is probably not the way to exploit it: they hadn't really thought about the hardlink case, although again it only applies to writeable / shared locations, so it is highly improbable to cause bugs either way. Hope you like the read.


Regards
Dhiraj

Friday, 13 July 2018

race condition in mstdlib - CVE-2018-14043

Hi Internet,

Affected Product: mstdlib, designed to provide features not normally found in libc and also to be used in place of libc.

Summary: mstdlib (aka the M Standard Library for C) 1.2.0 has incorrect file access control in situations where `M_fs_perms_can_access` attempts to delete an existing file (that lacks public read/write access) during a copy operation, related to `fs/m_fs.c and fs/m_fs_path.c`. An attacker could create the file and then would have access to the data.

Race Condition: A race condition occurs within concurrent environments, and is effectively a property of a code sequence. Depending on the context, a code sequence may be in the form of a function call, a small number of instructions, a series of program invocations, etc. [1]

Outcome: CVE-2018-14043 was assigned to this, and a fix was contributed to the M Standard Library for C.

InProcess: I have requested that the Internet Bug Bounty Program panel members consider this for a bounty (fingers crossed).

Well, this was tough to identify! The file `m_fs_perms_unx.c`, at line 277, makes a call to `access()`:
if (access(norm_path, access_mode) == -1) {
If an attacker can change anything along the path between the call to `access()` and the file actually being used, the attacker may exploit the race condition: it is possible to make changes to a path between calling M_fs_perms_can_access and doing something with the path. However, this function conforms to ISO/IEC 9945-1:1990, and the same security considerations apply as when using `access()` directly.
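As a general mitigation (a hedged sketch of a common pattern, not mstdlib's API), the check can be moved onto an already-opened file descriptor, so there is no window between checking the path and using it:
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Open first, then inspect the descriptor with fstat(). Once the fd is held,
 * renaming or symlinking the path can no longer change which object we use. */
static int open_regular_file(const char *path)
{
    int fd = open(path, O_RDONLY | O_NOFOLLOW);   /* refuse to follow a symlink */
    if (fd < 0) {
        perror("open");
        return -1;
    }

    struct stat st;
    if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode)) {
        close(fd);                                /* not an ordinary file, bail out */
        return -1;
    }
    return fd;
}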

As reported, a quick commit was pushed to the master branch within a day.
@@ -101,6 +101,15 @@ static M_bool M_fs_isfileintodir(const char *p1, const char *p2, char **new_p2)
  return M_TRUE;
 }
 
+/* Used by copy and move to determine if we can write to the given path
+ * based on a file already existing there or not.
+ *
+ * access is used to determine existence because we don't want to overwrite
+ * if there already is a file. This is not guaranteed because if there is
+ * a race condition where a file is created after this check it will be
+ * overwritten. Not much we can do about that. It shouldn't pose a security
+ * issue since this is more of a request than a requirement.
+ */
 static M_bool M_fs_check_overwrite_allowed(const char *p1, const char *p2, M_uint32 mode)
 {
  M_fs_info_t  *info = NULL;
@@ -129,8 +138,7 @@ static M_bool M_fs_check_overwrite_allowed(const char *p1, const char *p2, M_uin
 
  if (type != M_FS_TYPE_DIR) {
   /* File exists at path. */
-  if (M_fs_perms_can_access(p2, M_FS_PERMS_MODE_NONE) == M_FS_ERROR_SUCCESS)
-  {
+  if (M_fs_perms_can_access(p2, M_FS_PERMS_MODE_NONE) == M_FS_ERROR_SUCCESS) {
    ret = M_FALSE;
    goto done;
   }
@@ -209,19 +217,6 @@ static M_fs_error_t M_fs_copy_file(const char *path_old, const char *path_new, M
  size_t         offset;
  M_fs_error_t   res;
 
- /* We're going to create/open/truncate the new file, then as we read the contents from the old file we'll write it
-   * to new file. */
- if (M_fs_perms_can_access(path_new, M_FS_PERMS_MODE_NONE) == M_FS_ERROR_SUCCESS) {
-  /* Try to delete the file since we'll be overwrite it. This is so when we create the file we create it without
-    * any permissions and to ensure that anything that has the file already open won't be able to read the new
-   * contents we're writing to the file or be able to change the perms. There is an unavoidable race condition
-   * between deleting and creating the file where someone could create the file and have access. However,
-   * depending on the OS they may have access even if the file is created with no perms... */
-  res = M_fs_delete(path_new, M_FALSE, NULL, M_FS_PROGRESS_NOEXTRA);
-  if (res != M_FS_ERROR_SUCCESS) {
-   return res;
-  }
- }
  /* Open the old file */
  res = M_fs_file_open(&fd_old, path_old, M_FS_BUF_SIZE, M_FS_FILE_MODE_READ|M_FS_FILE_MODE_NOCREATE, NULL);
  if (res != M_FS_ERROR_SUCCESS) {
@@ -236,6 +231,9 @@ static M_fs_error_t M_fs_copy_file(const char *path_old, const char *path_new, M
   }
   perms = M_fs_info_get_perms(info);
  }
+
+ /* We're going to create/open/truncate the new file, then as we read the contents from the old file we'll write it
+  * to new file. */
  res = M_fs_file_open(&fd_new, path_new, M_FS_BUF_SIZE, M_FS_FILE_MODE_WRITE|M_FS_FILE_MODE_OVERWRITE, perms);
  M_fs_info_destroy(info);
  if (res != M_FS_ERROR_SUCCESS) {
@@ -333,7 +331,7 @@ M_fs_error_t M_fs_move(const char *path_old, const char *path_new, M_uint32 mode
  }
 
  /* Normalize the old path and do basic checks that it exists. We'll leave really checking that the old path
-   * existing to rename because any check we perform may not be true when rename is called. */
+  * existing to rename because any check we perform may not be true when rename is called. */
  res = M_fs_path_norm(&norm_path_old, path_old, M_FS_PATH_NORM_RESALL, M_FS_SYSTEM_AUTO);
  if (res != M_FS_ERROR_SUCCESS) {
   M_free(norm_path_new);
@@ -351,7 +349,7 @@ M_fs_error_t M_fs_move(const char *path_old, const char *path_new, M_uint32 mode
   return res;
  }
 
-  /* There is a race condition where the path could not exist but be created between the exists check and calling
+ /* There is a race condition where the path could not exist but be created between the exists check and calling
   * rename to move the file but there isn't much we can do in this case. copy will delete and the file so this
   * situation won't cause an error. */
  if (!M_fs_check_overwrite_allowed(norm_path_old, norm_path_new, mode)) {
@@ -399,15 +397,15 @@ M_fs_error_t M_fs_move(const char *path_old, const char *path_new, M_uint32 mode
    res = M_fs_delete(norm_path_old, M_TRUE, NULL, M_FS_PROGRESS_NOEXTRA);
   } else {
    /* Failure - Delete the new files that were copied but only if we are not overwriting. We don't
-     * want to remove any existing files (especially if the dest is a dir). */
+    * want to remove any existing files (especially if the dest is a dir). */
    if (!(mode & M_FS_FILE_MODE_OVERWRITE)) {
     M_fs_delete(norm_path_new, M_TRUE, NULL, M_FS_PROGRESS_NOEXTRA);
    }
    res = M_FS_ERROR_GENERIC;
   }
  } else {
   /* Call the cb with the result of the move whether it was a success for fail. We call the cb only if the
-    * result of the move is not M_FS_ERROR_NOT_SAMEDEV because the copy operation will call the cb for us. */
+   * result of the move is not M_FS_ERROR_NOT_SAMEDEV because the copy operation will call the cb for us. */
   if (cb) {
    M_fs_progress_set_result(progress, res);
    if (!cb(progress)) {
@@ -465,7 +463,7 @@ M_fs_error_t M_fs_copy(const char *path_old, const char *path_new, M_uint32 mode
  }
 
  /* Normalize the old path and do basic checks that it exists. We'll leave really checking that the old path
-   * existing to rename because any check we perform may not be true when rename is called. */
+  * existing to rename because any check we perform may not be true when rename is called. */
  res = M_fs_path_norm(&norm_path_old, path_old, M_FS_PATH_NORM_RESALL, M_FS_SYSTEM_AUTO);
  if (res != M_FS_ERROR_SUCCESS) {
   M_free(norm_path_new);
@@ -485,7 +483,7 @@ M_fs_error_t M_fs_copy(const char *path_old, const char *path_new, M_uint32 mode
 
  type = M_fs_info_get_type(info);
 
-  /* There is a race condition where the path could not exist but be created between the exists check and calling
+ /* There is a race condition where the path could not exist but be created between the exists check and calling
   * rename to move the file but there isn't much we can do in this case. copy will delete and the file so this
   * situation won't cause an error. */
  if (!M_fs_check_overwrite_allowed(norm_path_old, norm_path_new, mode)) {
@@ -497,7 +495,7 @@ M_fs_error_t M_fs_copy(const char *path_old, const char *path_new, M_uint32 mode
 
  entries = M_fs_dir_entries_create();
  /* No need to destroy info  because it's now owned by entries and will be destroyed when entries is destroyed.
-   * M_FS_DIR_WALK_FILTER_READ_INFO_BASIC doesn't actually get the perms it's just there to ensure the info is
+  * M_FS_DIR_WALK_FILTER_READ_INFO_BASIC doesn't actually get the perms it's just there to ensure the info is
   * stored in the entry. */
  M_fs_dir_entries_insert(entries, M_fs_dir_walk_fill_entry(norm_path_new, NULL, type, info, M_FS_DIR_WALK_FILTER_READ_INFO_BASIC));
  if (type == M_FS_TYPE_DIR) {
@@ -523,7 +521,7 @@ M_fs_error_t M_fs_copy(const char *path_old, const char *path_new, M_uint32 mode
 
    type = M_fs_dir_entry_get_type(entry);
    /* The total isn't the total number of files but the total number of operations. 
-     * Making dirs and symlinks is one operation and copying a file will be split into
+    * Making dirs and symlinks is one operation and copying a file will be split into
     * multiple operations. Copying uses the M_FS_BUF_SIZE to read and write in
     * chunks. We determine how many chunks will be needed to read the entire file and
     * use that for the number of operations for the file. */
@@ -600,7 +598,7 @@ M_fs_error_t M_fs_copy(const char *path_old, const char *path_new, M_uint32 mode
  }
 
  /* Delete the file(s) if it could not be copied properly, but only if we are not overwriting.
-   * If we're overwriting then there could be other files in that location (especially if it's a dir). */
+  * If we're overwriting then there could be other files in that location (especially if it's a dir). */
  if (res != M_FS_ERROR_SUCCESS && !(mode & M_FS_FILE_MODE_OVERWRITE)) {
   M_fs_delete(path_new, M_TRUE, NULL, M_FS_PROGRESS_NOEXTRA);
  }
@@ -659,7 +657,7 @@ M_fs_error_t M_fs_delete(const char *path, M_bool remove_children, M_fs_progress
  entries = M_fs_dir_entries_create();
 
  /* Recursive directory deletion isn't intuitive. We have to generate a list of files and delete the list.
-   * We cannot delete as walk because not all file systems support that operation. The walk; delete; behavior
+  * We cannot delete as walk because not all file systems support that operation. The walk; delete; behavior
   * is undefined in Posix and HFS is known to skip files if the directory contents is modifies as the
   * directory is being walked. */
  if (type == M_FS_TYPE_DIR && remove_children) {
@@ -671,7 +669,7 @@ M_fs_error_t M_fs_delete(const char *path, M_bool remove_children, M_fs_progress
  }
 
  /* Add the original path to the list of entries. This may be the only entry in the list. We need to add
-   * it after a potential walk because we can't delete a directory that isn't empty.
+  * it after a potential walk because we can't delete a directory that isn't empty.
   * Note: 
   *   - The info will be owned by the entry and destroyed when it is destroyed. 
   *   - The basic info param doesn't get the info in this case. it's set so the info is stored in the entry. */
@@ -680,7 +678,7 @@ M_fs_error_t M_fs_delete(const char *path, M_bool remove_children, M_fs_progress
  len = M_fs_dir_entries_len(entries);
  if (cb) {
   /* Create the progress. The same progress will be used for the entire operation. It will be updated with
-    * new info as necessary. */
+   * new info as necessary. */
   progress = M_fs_progress_create();
 
   /* Get the total size of all files to be deleted if using the progress cb and size totals is set. */
PS: Don't try to delete the file when copying; it could cause a security issue. If the existing file doesn't allow others to read/write, deleting it could allow someone to re-create the file and gain access to the data.

I am still working on how to weaponize this bug (I know most of my posts say this, but I am working on all of them). Anyway, hope you like the read.

"It was not a comprehensive review but addresses the concerns of this ticket," said a member of the mstdlib team.


Regards
Dhiraj

Wednesday, 11 July 2018

strncat() without bounds - Samsung TizenRT

Hi Internet,

Product affected: Samsung TizenRT, a lightweight RTOS-based platform to support low-end IoT devices.

Summary: In the file external/webserver/http_client.c, `strncat()` was being used without calculating the remaining length of the destination string buffer; this might allow attackers to trigger a buffer overflow via a long query processed by the `strncat` calls.

I feel the risk is high because the length parameter is a constant instead of the number of characters left in the buffer. Anyway, a quick patch (shown below) was pushed for the same.
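First, a minimal hedged sketch (mine, not TizenRT code) of the bound `strncat()` actually needs when appending to a partially filled buffer: the space left in the destination, minus one byte for the terminating '\0' that `strncat()` always writes.
#include <string.h>

/* Append src to dst, where dst_size is the total size of dst. */
static void append_bounded(char *dst, size_t dst_size, const char *src)
{
    size_t used = strlen(dst);
    if (used + 1 >= dst_size)
        return;                             /* no room left, drop the append */
    strncat(dst, src, dst_size - used - 1); /* bound = remaining space - 1 for '\0' */
}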
@@ -593,8 +593,8 @@ void http_handle_file(struct http_client_t *client, int method, const char *url,
         } else
             valid = 1;
         /* Need to create file with unique file name */
-        strncat(path, url, HTTP_CONF_MAX_REQUEST_HEADER_URL_LENGTH);
-        strncat(path, "index.shtml", HTTP_CONF_MAX_REQUEST_HEADER_URL_LENGTH);
+        strncat(path, url, HTTP_CONF_MAX_REQUEST_HEADER_URL_LENGTH - strlen(path));
+        strncat(path, "index.shtml", HTTP_CONF_MAX_REQUEST_HEADER_URL_LENGTH - strlen(path));
         if (valid && (f = fopen(path, "w"))) {
             if (fputs(entity, f) < 0) {
                 HTTP_LOGE("Error: Fail to execute fputs\n");
@@ -619,7 +619,7 @@ void http_handle_file(struct http_client_t *client, int method, const char *url,
             HTTP_LOGE("ERROR: URL length is too long. Cannot send response\n");
         } else
             valid = 1;
-        if (valid && (f = fopen(strncat(path, url, HTTP_CONF_MAX_REQUEST_HEADER_URL_LENGTH), "w"))) {
+        if (valid && (f = fopen(strncat(path, url, HTTP_CONF_MAX_REQUEST_HEADER_URL_LENGTH - strlen(path)), "w"))) {
             if (fputs(entity, f) < 0) {
                 HTTP_LOGE("Error: Fail to execute fputs\n");
                 fclose(f);
@@ -643,7 +643,7 @@ void http_handle_file(struct http_client_t *client, int method, const char *url,
             HTTP_LOGE("ERROR: URL length is too long. Cannot send response\n");
         } else
             valid = 1;
-        if (valid && unlink(strncat(path, url, HTTP_CONF_MAX_REQUEST_HEADER_URL_LENGTH))) {
+        if (valid && unlink(strncat(path, url, HTTP_CONF_MAX_REQUEST_HEADER_URL_LENGTH - strlen(path)))) {
             HTTP_LOGE("fail to delete %s\n", url);
             if (http_send_response(client, 500, HTTP_ERROR_500, NULL) == HTTP_ERROR) {
                 HTTP_LOGE("Error: Fail to send response\n");
This prevents the buffer overflow by computing the remaining length of the destination string buffer. Hope you like the read.


Regards
Dhiraj

Wednesday, 4 July 2018

strncat() without bounds - TOR

Hi Internet,


While going through the code of TOR, it was observed that the `backtrace.c` file, located at `/src/lib/err/backtrace.c` (lines #L267-L268), was using `strncat()` in a way that is easy to misuse:
strncat(version, " ", sizeof(version)-1);
strncat(version, tor_version, sizeof(version)-1);
Example: the maximum size to append is computed incorrectly; `sizeof(version)-1` is the total size of the buffer, not the space remaining in it.
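To see why, a toy sketch (mine, not the Tor source): `strncat(dst, src, n)` appends up to n characters plus a terminating '\0' no matter how full dst already is, so passing `sizeof(version)-1` on every call can run past the end of the array.
#include <string.h>

void pitfall(const char *tor_version)
{
    char version[128];
    strncpy(version, "Tor", sizeof(version) - 1);
    version[sizeof(version) - 1] = '\0';

    strncat(version, " ", sizeof(version) - 1);
    /* The next call may append up to 127 characters plus a '\0' after the
     * 4 bytes already in version, i.e. up to 132 bytes into a 128-byte array. */
    strncat(version, tor_version, sizeof(version) - 1);
}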

But the tor-security team marked this bug as low severity, because the use of `strncat()` here is a defence-in-depth mechanism that doesn't provide as much security as it should. Apart from that, it is not influenceable by an attacker: they have no control over this file or the version string.

Still, a quick commit (patch) was pushed directly to the master branch of TOR:
#include 
 #include 
 #include 
+#include 
 
 #ifdef HAVE_CYGWIN_SIGNAL_H
 #include 
@@ -264,16 +265,12 @@ dump_stack_symbols_to_error_fds(void)
 int
 configure_backtrace_handler(const char *tor_version)
 {
-  char version[128];
-  strncpy(version, "Tor", sizeof(version)-1);
+  char version[128] = "Tor\0";
 
   if (tor_version) {
-    strncat(version, " ", sizeof(version)-1);
-    strncat(version, tor_version, sizeof(version)-1);
+    snprintf(version, sizeof(version), "Tor %s", tor_version);
   }
 
-  version[sizeof(version) - 1] = 0;
-
   return install_bt_handler(version);
 }
Perhaps the reason `strncat()` was used was to avoid pulling in anything from lib/string. Nothing that great, but hope you like the read.

Regards
Dhiraj